00:00:00.001 Started by upstream project "autotest-per-patch" build number 126261 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.039 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.040 The recommended git tool is: git 00:00:00.040 using credential 00000000-0000-0000-0000-000000000002 00:00:00.041 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.056 Fetching changes from the remote Git repository 00:00:00.058 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.082 Using shallow fetch with depth 1 00:00:00.082 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.082 > git --version # timeout=10 00:00:00.125 > git --version # 'git version 2.39.2' 00:00:00.125 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.173 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.173 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.941 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.952 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.964 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:03.964 > git config core.sparsecheckout # timeout=10 00:00:03.976 > git read-tree -mu HEAD # timeout=10 00:00:03.993 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:04.014 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:04.014 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:04.103 [Pipeline] Start of Pipeline 00:00:04.117 [Pipeline] library 00:00:04.118 Loading library shm_lib@master 00:00:04.119 Library shm_lib@master is cached. Copying from home. 00:00:04.137 [Pipeline] node 00:00:04.144 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.145 [Pipeline] { 00:00:04.153 [Pipeline] catchError 00:00:04.154 [Pipeline] { 00:00:04.163 [Pipeline] wrap 00:00:04.170 [Pipeline] { 00:00:04.175 [Pipeline] stage 00:00:04.177 [Pipeline] { (Prologue) 00:00:04.359 [Pipeline] sh 00:00:04.644 + logger -p user.info -t JENKINS-CI 00:00:04.664 [Pipeline] echo 00:00:04.665 Node: GP11 00:00:04.671 [Pipeline] sh 00:00:04.989 [Pipeline] setCustomBuildProperty 00:00:05.002 [Pipeline] echo 00:00:05.003 Cleanup processes 00:00:05.008 [Pipeline] sh 00:00:05.287 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.287 2430707 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.296 [Pipeline] sh 00:00:05.573 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.573 ++ grep -v 'sudo pgrep' 00:00:05.573 ++ awk '{print $1}' 00:00:05.573 + sudo kill -9 00:00:05.573 + true 00:00:05.586 [Pipeline] cleanWs 00:00:05.594 [WS-CLEANUP] Deleting project workspace... 00:00:05.594 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.600 [WS-CLEANUP] done 00:00:05.603 [Pipeline] setCustomBuildProperty 00:00:05.615 [Pipeline] sh 00:00:05.890 + sudo git config --global --replace-all safe.directory '*' 00:00:05.975 [Pipeline] httpRequest 00:00:06.009 [Pipeline] echo 00:00:06.011 Sorcerer 10.211.164.101 is alive 00:00:06.019 [Pipeline] httpRequest 00:00:06.023 HttpMethod: GET 00:00:06.023 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.024 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.037 Response Code: HTTP/1.1 200 OK 00:00:06.037 Success: Status code 200 is in the accepted range: 200,404 00:00:06.038 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.910 [Pipeline] sh 00:00:09.192 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:09.209 [Pipeline] httpRequest 00:00:09.233 [Pipeline] echo 00:00:09.235 Sorcerer 10.211.164.101 is alive 00:00:09.244 [Pipeline] httpRequest 00:00:09.249 HttpMethod: GET 00:00:09.249 URL: http://10.211.164.101/packages/spdk_8c20d24e09a5c50b0d3ea114e2a0266b40361b1e.tar.gz 00:00:09.250 Sending request to url: http://10.211.164.101/packages/spdk_8c20d24e09a5c50b0d3ea114e2a0266b40361b1e.tar.gz 00:00:09.259 Response Code: HTTP/1.1 200 OK 00:00:09.260 Success: Status code 200 is in the accepted range: 200,404 00:00:09.260 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_8c20d24e09a5c50b0d3ea114e2a0266b40361b1e.tar.gz 00:01:03.779 [Pipeline] sh 00:01:04.062 + tar --no-same-owner -xf spdk_8c20d24e09a5c50b0d3ea114e2a0266b40361b1e.tar.gz 00:01:06.606 [Pipeline] sh 00:01:06.891 + git -C spdk log --oneline -n5 00:01:06.891 8c20d24e0 spdk_nvme_perf: allocate buffers from socket_id reported by ctrlr 00:01:06.891 e9e51ebfe nvme/pcie: allocate cq from device-local numa node's memory 00:01:06.891 fcbf7f00f bdev/nvme: show `numa_socket_id` for bdev_nvme_get_controllers 00:01:06.891 47ca8c1aa nvme: populate socket_id for rdma controllers 00:01:06.891 c1860effd nvme: populate socket_id for tcp controllers 00:01:06.904 [Pipeline] } 00:01:06.922 [Pipeline] // stage 00:01:06.932 [Pipeline] stage 00:01:06.935 [Pipeline] { (Prepare) 00:01:06.955 [Pipeline] writeFile 00:01:06.973 [Pipeline] sh 00:01:07.257 + logger -p user.info -t JENKINS-CI 00:01:07.302 [Pipeline] sh 00:01:07.583 + logger -p user.info -t JENKINS-CI 00:01:07.598 [Pipeline] sh 00:01:07.882 + cat autorun-spdk.conf 00:01:07.882 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:07.882 SPDK_TEST_NVMF=1 00:01:07.882 SPDK_TEST_NVME_CLI=1 00:01:07.882 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:07.882 SPDK_TEST_NVMF_NICS=e810 00:01:07.882 SPDK_TEST_VFIOUSER=1 00:01:07.882 SPDK_RUN_UBSAN=1 00:01:07.882 NET_TYPE=phy 00:01:07.889 RUN_NIGHTLY=0 00:01:07.895 [Pipeline] readFile 00:01:07.926 [Pipeline] withEnv 00:01:07.929 [Pipeline] { 00:01:07.945 [Pipeline] sh 00:01:08.230 + set -ex 00:01:08.230 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:08.230 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:08.230 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.230 ++ SPDK_TEST_NVMF=1 00:01:08.230 ++ SPDK_TEST_NVME_CLI=1 00:01:08.230 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:08.230 ++ SPDK_TEST_NVMF_NICS=e810 00:01:08.230 ++ SPDK_TEST_VFIOUSER=1 00:01:08.230 ++ SPDK_RUN_UBSAN=1 00:01:08.230 ++ NET_TYPE=phy 00:01:08.230 ++ RUN_NIGHTLY=0 00:01:08.230 + case $SPDK_TEST_NVMF_NICS in 00:01:08.230 + 
DRIVERS=ice 00:01:08.230 + [[ tcp == \r\d\m\a ]] 00:01:08.230 + [[ -n ice ]] 00:01:08.230 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:08.230 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:08.230 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:08.230 rmmod: ERROR: Module irdma is not currently loaded 00:01:08.230 rmmod: ERROR: Module i40iw is not currently loaded 00:01:08.230 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:08.230 + true 00:01:08.230 + for D in $DRIVERS 00:01:08.230 + sudo modprobe ice 00:01:08.230 + exit 0 00:01:08.239 [Pipeline] } 00:01:08.258 [Pipeline] // withEnv 00:01:08.264 [Pipeline] } 00:01:08.283 [Pipeline] // stage 00:01:08.294 [Pipeline] catchError 00:01:08.296 [Pipeline] { 00:01:08.313 [Pipeline] timeout 00:01:08.314 Timeout set to expire in 50 min 00:01:08.316 [Pipeline] { 00:01:08.332 [Pipeline] stage 00:01:08.334 [Pipeline] { (Tests) 00:01:08.352 [Pipeline] sh 00:01:08.636 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:08.636 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:08.636 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:08.636 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:08.636 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:08.636 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:08.636 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:08.636 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:08.636 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:08.636 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:08.636 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:08.636 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:08.636 + source /etc/os-release 00:01:08.636 ++ NAME='Fedora Linux' 00:01:08.636 ++ VERSION='38 (Cloud Edition)' 00:01:08.636 ++ ID=fedora 00:01:08.636 ++ VERSION_ID=38 00:01:08.636 ++ VERSION_CODENAME= 00:01:08.636 ++ PLATFORM_ID=platform:f38 00:01:08.636 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:08.636 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:08.636 ++ LOGO=fedora-logo-icon 00:01:08.636 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:08.636 ++ HOME_URL=https://fedoraproject.org/ 00:01:08.636 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:08.636 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:08.636 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:08.636 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:08.636 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:08.636 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:08.636 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:08.636 ++ SUPPORT_END=2024-05-14 00:01:08.636 ++ VARIANT='Cloud Edition' 00:01:08.636 ++ VARIANT_ID=cloud 00:01:08.636 + uname -a 00:01:08.636 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:08.636 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:09.570 Hugepages 00:01:09.570 node hugesize free / total 00:01:09.570 node0 1048576kB 0 / 0 00:01:09.570 node0 2048kB 0 / 0 00:01:09.570 node1 1048576kB 0 / 0 00:01:09.570 node1 2048kB 0 / 0 00:01:09.570 00:01:09.570 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:09.570 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:09.570 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:09.570 I/OAT 0000:00:04.2 8086 
0e22 0 ioatdma - - 00:01:09.570 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:09.570 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:09.570 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:09.570 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:09.570 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:09.570 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:09.570 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:09.570 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:09.570 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:09.570 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:09.570 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:09.570 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:09.570 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:09.829 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:09.829 + rm -f /tmp/spdk-ld-path 00:01:09.829 + source autorun-spdk.conf 00:01:09.829 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.829 ++ SPDK_TEST_NVMF=1 00:01:09.829 ++ SPDK_TEST_NVME_CLI=1 00:01:09.829 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.829 ++ SPDK_TEST_NVMF_NICS=e810 00:01:09.829 ++ SPDK_TEST_VFIOUSER=1 00:01:09.829 ++ SPDK_RUN_UBSAN=1 00:01:09.829 ++ NET_TYPE=phy 00:01:09.829 ++ RUN_NIGHTLY=0 00:01:09.829 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:09.829 + [[ -n '' ]] 00:01:09.829 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:09.829 + for M in /var/spdk/build-*-manifest.txt 00:01:09.829 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:09.829 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:09.829 + for M in /var/spdk/build-*-manifest.txt 00:01:09.829 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:09.829 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:09.829 ++ uname 00:01:09.829 + [[ Linux == \L\i\n\u\x ]] 00:01:09.829 + sudo dmesg -T 00:01:09.829 + sudo dmesg --clear 00:01:09.829 + dmesg_pid=2431382 00:01:09.829 + [[ Fedora Linux == FreeBSD ]] 00:01:09.829 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:09.829 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:09.829 + sudo dmesg -Tw 00:01:09.829 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:09.829 + [[ -x /usr/src/fio-static/fio ]] 00:01:09.829 + export FIO_BIN=/usr/src/fio-static/fio 00:01:09.829 + FIO_BIN=/usr/src/fio-static/fio 00:01:09.829 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:09.829 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:09.829 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:09.829 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:09.829 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:09.829 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:09.829 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:09.829 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:09.829 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:09.829 Test configuration: 00:01:09.829 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.829 SPDK_TEST_NVMF=1 00:01:09.829 SPDK_TEST_NVME_CLI=1 00:01:09.829 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.829 SPDK_TEST_NVMF_NICS=e810 00:01:09.829 SPDK_TEST_VFIOUSER=1 00:01:09.829 SPDK_RUN_UBSAN=1 00:01:09.829 NET_TYPE=phy 00:01:09.829 RUN_NIGHTLY=0 00:37:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:09.829 00:37:44 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:09.829 00:37:44 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:09.829 00:37:44 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:09.829 00:37:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:09.829 00:37:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:09.829 00:37:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:09.829 00:37:44 -- paths/export.sh@5 -- $ export PATH 00:01:09.829 00:37:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:09.829 00:37:44 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:09.829 00:37:44 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:09.829 00:37:44 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721083064.XXXXXX 00:01:09.829 00:37:44 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721083064.OZC12V 00:01:09.829 00:37:44 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:09.829 00:37:44 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:09.829 00:37:44 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:09.829 00:37:44 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:09.829 00:37:44 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:09.829 00:37:44 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:09.829 00:37:44 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:09.829 00:37:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:09.829 00:37:44 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:09.829 00:37:44 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:09.829 00:37:44 -- pm/common@17 -- $ local monitor 00:01:09.829 00:37:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:09.829 00:37:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:09.829 00:37:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:09.829 00:37:44 -- pm/common@21 -- $ date +%s 00:01:09.829 00:37:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:09.829 00:37:44 -- pm/common@21 -- $ date +%s 00:01:09.829 00:37:44 -- pm/common@25 -- $ sleep 1 00:01:09.829 00:37:44 -- pm/common@21 -- $ date +%s 00:01:09.829 00:37:44 -- pm/common@21 -- $ date +%s 00:01:09.829 00:37:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721083064 00:01:09.829 00:37:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721083064 00:01:09.829 00:37:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721083064 00:01:09.829 00:37:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721083064 00:01:09.829 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721083064_collect-vmstat.pm.log 00:01:09.829 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721083064_collect-cpu-load.pm.log 00:01:09.829 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721083064_collect-cpu-temp.pm.log 00:01:09.829 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721083064_collect-bmc-pm.bmc.pm.log 00:01:10.766 00:37:45 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:10.766 00:37:45 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:10.766 00:37:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:10.766 00:37:45 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:10.766 00:37:45 -- spdk/autobuild.sh@16 -- $ date -u 00:01:11.025 Mon Jul 15 10:37:45 PM UTC 2024 00:01:11.025 00:37:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:11.025 v24.09-pre-236-g8c20d24e0 00:01:11.025 00:37:45 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:11.025 00:37:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:11.025 00:37:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:11.025 00:37:45 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:11.025 00:37:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:11.025 00:37:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:11.025 ************************************ 00:01:11.025 START TEST ubsan 00:01:11.025 ************************************ 00:01:11.025 00:37:45 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:11.025 using ubsan 00:01:11.025 00:01:11.025 real 0m0.000s 00:01:11.025 user 0m0.000s 00:01:11.025 sys 0m0.000s 00:01:11.025 00:37:45 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:11.025 00:37:45 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:11.025 ************************************ 00:01:11.025 END TEST ubsan 00:01:11.025 ************************************ 00:01:11.025 00:37:45 -- common/autotest_common.sh@1142 -- $ return 0 00:01:11.025 00:37:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:11.025 00:37:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:11.025 00:37:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:11.025 00:37:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:11.025 00:37:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:11.025 00:37:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:11.025 00:37:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:11.025 00:37:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:11.025 00:37:45 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:11.025 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:11.025 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:11.283 Using 'verbs' RDMA provider 00:01:21.935 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:31.912 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:31.912 Creating mk/config.mk...done. 00:01:31.912 Creating mk/cc.flags.mk...done. 00:01:31.912 Type 'make' to build. 00:01:31.912 00:38:05 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:31.912 00:38:05 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:31.912 00:38:05 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:31.912 00:38:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.912 ************************************ 00:01:31.912 START TEST make 00:01:31.912 ************************************ 00:01:31.912 00:38:06 make -- common/autotest_common.sh@1123 -- $ make -j48 00:01:31.912 make[1]: Nothing to be done for 'all'. 
00:01:33.307 The Meson build system 00:01:33.307 Version: 1.3.1 00:01:33.307 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:33.307 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:33.307 Build type: native build 00:01:33.307 Project name: libvfio-user 00:01:33.307 Project version: 0.0.1 00:01:33.307 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:33.307 C linker for the host machine: cc ld.bfd 2.39-16 00:01:33.307 Host machine cpu family: x86_64 00:01:33.307 Host machine cpu: x86_64 00:01:33.307 Run-time dependency threads found: YES 00:01:33.307 Library dl found: YES 00:01:33.307 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:33.307 Run-time dependency json-c found: YES 0.17 00:01:33.307 Run-time dependency cmocka found: YES 1.1.7 00:01:33.307 Program pytest-3 found: NO 00:01:33.307 Program flake8 found: NO 00:01:33.307 Program misspell-fixer found: NO 00:01:33.307 Program restructuredtext-lint found: NO 00:01:33.307 Program valgrind found: YES (/usr/bin/valgrind) 00:01:33.307 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:33.307 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:33.307 Compiler for C supports arguments -Wwrite-strings: YES 00:01:33.307 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:33.307 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:33.307 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:33.307 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:33.307 Build targets in project: 8 00:01:33.307 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:33.307 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:33.307 00:01:33.307 libvfio-user 0.0.1 00:01:33.307 00:01:33.307 User defined options 00:01:33.307 buildtype : debug 00:01:33.307 default_library: shared 00:01:33.307 libdir : /usr/local/lib 00:01:33.307 00:01:33.307 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:33.886 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:34.149 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:34.149 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:34.149 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:34.149 [4/37] Compiling C object samples/null.p/null.c.o 00:01:34.149 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:34.149 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:34.149 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:34.149 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:34.149 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:34.149 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:34.149 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:34.149 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:34.149 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:34.149 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:34.411 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:34.411 [16/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:34.411 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:34.411 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:34.411 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:34.411 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:34.411 [21/37] Compiling C object samples/server.p/server.c.o 00:01:34.411 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:34.411 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:34.411 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:34.411 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:34.411 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:34.411 [27/37] Compiling C object samples/client.p/client.c.o 00:01:34.411 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:01:34.411 [29/37] Linking target samples/client 00:01:34.675 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:34.675 [31/37] Linking target test/unit_tests 00:01:34.675 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:34.675 [33/37] Linking target samples/null 00:01:34.675 [34/37] Linking target samples/gpio-pci-idio-16 00:01:34.675 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:34.675 [36/37] Linking target samples/lspci 00:01:34.675 [37/37] Linking target samples/server 00:01:34.675 INFO: autodetecting backend as ninja 00:01:34.675 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:34.675 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:35.618 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:35.618 ninja: no work to do. 00:01:40.896 The Meson build system 00:01:40.896 Version: 1.3.1 00:01:40.896 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:40.896 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:40.896 Build type: native build 00:01:40.896 Program cat found: YES (/usr/bin/cat) 00:01:40.896 Project name: DPDK 00:01:40.896 Project version: 24.03.0 00:01:40.896 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:40.896 C linker for the host machine: cc ld.bfd 2.39-16 00:01:40.896 Host machine cpu family: x86_64 00:01:40.896 Host machine cpu: x86_64 00:01:40.896 Message: ## Building in Developer Mode ## 00:01:40.896 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:40.896 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:40.896 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:40.896 Program python3 found: YES (/usr/bin/python3) 00:01:40.896 Program cat found: YES (/usr/bin/cat) 00:01:40.896 Compiler for C supports arguments -march=native: YES 00:01:40.896 Checking for size of "void *" : 8 00:01:40.896 Checking for size of "void *" : 8 (cached) 00:01:40.896 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:40.896 Library m found: YES 00:01:40.896 Library numa found: YES 00:01:40.896 Has header "numaif.h" : YES 00:01:40.896 Library fdt found: NO 00:01:40.896 Library execinfo found: NO 00:01:40.896 Has header "execinfo.h" : YES 00:01:40.896 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:40.896 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:40.896 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:40.896 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:40.896 Run-time dependency openssl found: YES 3.0.9 00:01:40.896 Run-time dependency libpcap found: YES 1.10.4 00:01:40.896 Has header "pcap.h" with dependency libpcap: YES 00:01:40.896 Compiler for C supports arguments -Wcast-qual: YES 00:01:40.896 Compiler for C supports arguments -Wdeprecated: YES 00:01:40.896 Compiler for C supports arguments -Wformat: YES 00:01:40.896 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:40.896 Compiler for C supports arguments -Wformat-security: NO 00:01:40.896 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:40.896 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:40.896 Compiler for C supports arguments -Wnested-externs: YES 00:01:40.896 Compiler for C supports arguments -Wold-style-definition: YES 00:01:40.896 Compiler for C supports arguments -Wpointer-arith: YES 00:01:40.896 Compiler for C supports arguments -Wsign-compare: YES 00:01:40.896 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:40.896 Compiler for C supports arguments -Wundef: YES 00:01:40.896 Compiler for C supports arguments -Wwrite-strings: YES 00:01:40.896 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:40.896 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:40.896 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:40.896 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:40.896 Program objdump found: YES (/usr/bin/objdump) 00:01:40.896 Compiler for C supports arguments -mavx512f: YES 00:01:40.896 Checking if "AVX512 checking" compiles: YES 00:01:40.896 Fetching value of define "__SSE4_2__" : 1 00:01:40.896 Fetching value of define "__AES__" : 1 00:01:40.896 Fetching value of define "__AVX__" : 1 00:01:40.896 Fetching value of define "__AVX2__" : (undefined) 00:01:40.896 Fetching value of define "__AVX512BW__" : (undefined) 00:01:40.896 Fetching value of define "__AVX512CD__" : (undefined) 00:01:40.896 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:40.896 Fetching value of define "__AVX512F__" : (undefined) 00:01:40.896 Fetching value of define "__AVX512VL__" : (undefined) 00:01:40.896 Fetching value of define "__PCLMUL__" : 1 00:01:40.897 Fetching value of define "__RDRND__" : 1 00:01:40.897 Fetching value of define "__RDSEED__" : (undefined) 00:01:40.897 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:40.897 Fetching value of define "__znver1__" : (undefined) 00:01:40.897 Fetching value of define "__znver2__" : (undefined) 00:01:40.897 Fetching value of define "__znver3__" : (undefined) 00:01:40.897 Fetching value of define "__znver4__" : (undefined) 00:01:40.897 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:40.897 Message: lib/log: Defining dependency "log" 00:01:40.897 Message: lib/kvargs: Defining dependency "kvargs" 00:01:40.897 Message: lib/telemetry: Defining dependency "telemetry" 00:01:40.897 Checking for function "getentropy" : NO 00:01:40.897 Message: lib/eal: Defining dependency "eal" 00:01:40.897 Message: lib/ring: Defining dependency "ring" 00:01:40.897 Message: lib/rcu: Defining dependency "rcu" 00:01:40.897 Message: lib/mempool: Defining dependency "mempool" 00:01:40.897 Message: lib/mbuf: Defining dependency "mbuf" 00:01:40.897 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:40.897 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:40.897 Compiler for C supports arguments -mpclmul: YES 00:01:40.897 Compiler for C supports arguments -maes: YES 00:01:40.897 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:40.897 Compiler for C supports arguments -mavx512bw: YES 00:01:40.897 Compiler for C supports arguments -mavx512dq: YES 00:01:40.897 Compiler for C supports arguments -mavx512vl: YES 00:01:40.897 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:40.897 Compiler for C supports arguments -mavx2: YES 00:01:40.897 Compiler for C supports arguments -mavx: YES 00:01:40.897 Message: lib/net: Defining dependency "net" 00:01:40.897 Message: lib/meter: Defining dependency "meter" 00:01:40.897 Message: lib/ethdev: Defining dependency "ethdev" 00:01:40.897 Message: lib/pci: Defining dependency "pci" 00:01:40.897 Message: lib/cmdline: Defining dependency "cmdline" 00:01:40.897 Message: lib/hash: Defining dependency "hash" 00:01:40.897 Message: lib/timer: Defining dependency "timer" 00:01:40.897 Message: lib/compressdev: Defining dependency "compressdev" 00:01:40.897 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:40.897 Message: lib/dmadev: Defining dependency "dmadev" 00:01:40.897 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:40.897 Message: lib/power: Defining dependency "power" 00:01:40.897 Message: lib/reorder: Defining dependency "reorder" 00:01:40.897 
Message: lib/security: Defining dependency "security" 00:01:40.897 Has header "linux/userfaultfd.h" : YES 00:01:40.897 Has header "linux/vduse.h" : YES 00:01:40.897 Message: lib/vhost: Defining dependency "vhost" 00:01:40.897 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:40.897 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:40.897 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:40.897 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:40.897 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:40.897 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:40.897 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:40.897 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:40.897 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:40.897 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:40.897 Program doxygen found: YES (/usr/bin/doxygen) 00:01:40.897 Configuring doxy-api-html.conf using configuration 00:01:40.897 Configuring doxy-api-man.conf using configuration 00:01:40.897 Program mandb found: YES (/usr/bin/mandb) 00:01:40.897 Program sphinx-build found: NO 00:01:40.897 Configuring rte_build_config.h using configuration 00:01:40.897 Message: 00:01:40.897 ================= 00:01:40.897 Applications Enabled 00:01:40.897 ================= 00:01:40.897 00:01:40.897 apps: 00:01:40.897 00:01:40.897 00:01:40.897 Message: 00:01:40.897 ================= 00:01:40.897 Libraries Enabled 00:01:40.897 ================= 00:01:40.897 00:01:40.897 libs: 00:01:40.897 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:40.897 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:40.897 cryptodev, dmadev, power, reorder, security, vhost, 00:01:40.897 00:01:40.897 Message: 00:01:40.897 =============== 00:01:40.897 Drivers Enabled 00:01:40.897 =============== 00:01:40.897 00:01:40.897 common: 00:01:40.897 00:01:40.897 bus: 00:01:40.897 pci, vdev, 00:01:40.897 mempool: 00:01:40.897 ring, 00:01:40.897 dma: 00:01:40.897 00:01:40.897 net: 00:01:40.897 00:01:40.897 crypto: 00:01:40.897 00:01:40.897 compress: 00:01:40.897 00:01:40.897 vdpa: 00:01:40.897 00:01:40.897 00:01:40.897 Message: 00:01:40.897 ================= 00:01:40.897 Content Skipped 00:01:40.897 ================= 00:01:40.897 00:01:40.897 apps: 00:01:40.897 dumpcap: explicitly disabled via build config 00:01:40.897 graph: explicitly disabled via build config 00:01:40.897 pdump: explicitly disabled via build config 00:01:40.897 proc-info: explicitly disabled via build config 00:01:40.897 test-acl: explicitly disabled via build config 00:01:40.897 test-bbdev: explicitly disabled via build config 00:01:40.897 test-cmdline: explicitly disabled via build config 00:01:40.897 test-compress-perf: explicitly disabled via build config 00:01:40.897 test-crypto-perf: explicitly disabled via build config 00:01:40.897 test-dma-perf: explicitly disabled via build config 00:01:40.897 test-eventdev: explicitly disabled via build config 00:01:40.897 test-fib: explicitly disabled via build config 00:01:40.897 test-flow-perf: explicitly disabled via build config 00:01:40.897 test-gpudev: explicitly disabled via build config 00:01:40.897 test-mldev: explicitly disabled via build config 00:01:40.897 test-pipeline: explicitly disabled via build config 00:01:40.897 test-pmd: explicitly disabled via build config 
00:01:40.897 test-regex: explicitly disabled via build config 00:01:40.897 test-sad: explicitly disabled via build config 00:01:40.897 test-security-perf: explicitly disabled via build config 00:01:40.897 00:01:40.897 libs: 00:01:40.897 argparse: explicitly disabled via build config 00:01:40.897 metrics: explicitly disabled via build config 00:01:40.897 acl: explicitly disabled via build config 00:01:40.897 bbdev: explicitly disabled via build config 00:01:40.897 bitratestats: explicitly disabled via build config 00:01:40.897 bpf: explicitly disabled via build config 00:01:40.897 cfgfile: explicitly disabled via build config 00:01:40.897 distributor: explicitly disabled via build config 00:01:40.897 efd: explicitly disabled via build config 00:01:40.897 eventdev: explicitly disabled via build config 00:01:40.897 dispatcher: explicitly disabled via build config 00:01:40.897 gpudev: explicitly disabled via build config 00:01:40.897 gro: explicitly disabled via build config 00:01:40.897 gso: explicitly disabled via build config 00:01:40.897 ip_frag: explicitly disabled via build config 00:01:40.897 jobstats: explicitly disabled via build config 00:01:40.897 latencystats: explicitly disabled via build config 00:01:40.897 lpm: explicitly disabled via build config 00:01:40.897 member: explicitly disabled via build config 00:01:40.897 pcapng: explicitly disabled via build config 00:01:40.897 rawdev: explicitly disabled via build config 00:01:40.897 regexdev: explicitly disabled via build config 00:01:40.897 mldev: explicitly disabled via build config 00:01:40.897 rib: explicitly disabled via build config 00:01:40.897 sched: explicitly disabled via build config 00:01:40.897 stack: explicitly disabled via build config 00:01:40.897 ipsec: explicitly disabled via build config 00:01:40.897 pdcp: explicitly disabled via build config 00:01:40.897 fib: explicitly disabled via build config 00:01:40.897 port: explicitly disabled via build config 00:01:40.897 pdump: explicitly disabled via build config 00:01:40.897 table: explicitly disabled via build config 00:01:40.897 pipeline: explicitly disabled via build config 00:01:40.897 graph: explicitly disabled via build config 00:01:40.897 node: explicitly disabled via build config 00:01:40.897 00:01:40.897 drivers: 00:01:40.897 common/cpt: not in enabled drivers build config 00:01:40.897 common/dpaax: not in enabled drivers build config 00:01:40.897 common/iavf: not in enabled drivers build config 00:01:40.897 common/idpf: not in enabled drivers build config 00:01:40.897 common/ionic: not in enabled drivers build config 00:01:40.897 common/mvep: not in enabled drivers build config 00:01:40.897 common/octeontx: not in enabled drivers build config 00:01:40.897 bus/auxiliary: not in enabled drivers build config 00:01:40.897 bus/cdx: not in enabled drivers build config 00:01:40.897 bus/dpaa: not in enabled drivers build config 00:01:40.897 bus/fslmc: not in enabled drivers build config 00:01:40.897 bus/ifpga: not in enabled drivers build config 00:01:40.897 bus/platform: not in enabled drivers build config 00:01:40.897 bus/uacce: not in enabled drivers build config 00:01:40.897 bus/vmbus: not in enabled drivers build config 00:01:40.897 common/cnxk: not in enabled drivers build config 00:01:40.897 common/mlx5: not in enabled drivers build config 00:01:40.897 common/nfp: not in enabled drivers build config 00:01:40.897 common/nitrox: not in enabled drivers build config 00:01:40.898 common/qat: not in enabled drivers build config 00:01:40.898 common/sfc_efx: not in 
enabled drivers build config 00:01:40.898 mempool/bucket: not in enabled drivers build config 00:01:40.898 mempool/cnxk: not in enabled drivers build config 00:01:40.898 mempool/dpaa: not in enabled drivers build config 00:01:40.898 mempool/dpaa2: not in enabled drivers build config 00:01:40.898 mempool/octeontx: not in enabled drivers build config 00:01:40.898 mempool/stack: not in enabled drivers build config 00:01:40.898 dma/cnxk: not in enabled drivers build config 00:01:40.898 dma/dpaa: not in enabled drivers build config 00:01:40.898 dma/dpaa2: not in enabled drivers build config 00:01:40.898 dma/hisilicon: not in enabled drivers build config 00:01:40.898 dma/idxd: not in enabled drivers build config 00:01:40.898 dma/ioat: not in enabled drivers build config 00:01:40.898 dma/skeleton: not in enabled drivers build config 00:01:40.898 net/af_packet: not in enabled drivers build config 00:01:40.898 net/af_xdp: not in enabled drivers build config 00:01:40.898 net/ark: not in enabled drivers build config 00:01:40.898 net/atlantic: not in enabled drivers build config 00:01:40.898 net/avp: not in enabled drivers build config 00:01:40.898 net/axgbe: not in enabled drivers build config 00:01:40.898 net/bnx2x: not in enabled drivers build config 00:01:40.898 net/bnxt: not in enabled drivers build config 00:01:40.898 net/bonding: not in enabled drivers build config 00:01:40.898 net/cnxk: not in enabled drivers build config 00:01:40.898 net/cpfl: not in enabled drivers build config 00:01:40.898 net/cxgbe: not in enabled drivers build config 00:01:40.898 net/dpaa: not in enabled drivers build config 00:01:40.898 net/dpaa2: not in enabled drivers build config 00:01:40.898 net/e1000: not in enabled drivers build config 00:01:40.898 net/ena: not in enabled drivers build config 00:01:40.898 net/enetc: not in enabled drivers build config 00:01:40.898 net/enetfec: not in enabled drivers build config 00:01:40.898 net/enic: not in enabled drivers build config 00:01:40.898 net/failsafe: not in enabled drivers build config 00:01:40.898 net/fm10k: not in enabled drivers build config 00:01:40.898 net/gve: not in enabled drivers build config 00:01:40.898 net/hinic: not in enabled drivers build config 00:01:40.898 net/hns3: not in enabled drivers build config 00:01:40.898 net/i40e: not in enabled drivers build config 00:01:40.898 net/iavf: not in enabled drivers build config 00:01:40.898 net/ice: not in enabled drivers build config 00:01:40.898 net/idpf: not in enabled drivers build config 00:01:40.898 net/igc: not in enabled drivers build config 00:01:40.898 net/ionic: not in enabled drivers build config 00:01:40.898 net/ipn3ke: not in enabled drivers build config 00:01:40.898 net/ixgbe: not in enabled drivers build config 00:01:40.898 net/mana: not in enabled drivers build config 00:01:40.898 net/memif: not in enabled drivers build config 00:01:40.898 net/mlx4: not in enabled drivers build config 00:01:40.898 net/mlx5: not in enabled drivers build config 00:01:40.898 net/mvneta: not in enabled drivers build config 00:01:40.898 net/mvpp2: not in enabled drivers build config 00:01:40.898 net/netvsc: not in enabled drivers build config 00:01:40.898 net/nfb: not in enabled drivers build config 00:01:40.898 net/nfp: not in enabled drivers build config 00:01:40.898 net/ngbe: not in enabled drivers build config 00:01:40.898 net/null: not in enabled drivers build config 00:01:40.898 net/octeontx: not in enabled drivers build config 00:01:40.898 net/octeon_ep: not in enabled drivers build config 00:01:40.898 
net/pcap: not in enabled drivers build config 00:01:40.898 net/pfe: not in enabled drivers build config 00:01:40.898 net/qede: not in enabled drivers build config 00:01:40.898 net/ring: not in enabled drivers build config 00:01:40.898 net/sfc: not in enabled drivers build config 00:01:40.898 net/softnic: not in enabled drivers build config 00:01:40.898 net/tap: not in enabled drivers build config 00:01:40.898 net/thunderx: not in enabled drivers build config 00:01:40.898 net/txgbe: not in enabled drivers build config 00:01:40.898 net/vdev_netvsc: not in enabled drivers build config 00:01:40.898 net/vhost: not in enabled drivers build config 00:01:40.898 net/virtio: not in enabled drivers build config 00:01:40.898 net/vmxnet3: not in enabled drivers build config 00:01:40.898 raw/*: missing internal dependency, "rawdev" 00:01:40.898 crypto/armv8: not in enabled drivers build config 00:01:40.898 crypto/bcmfs: not in enabled drivers build config 00:01:40.898 crypto/caam_jr: not in enabled drivers build config 00:01:40.898 crypto/ccp: not in enabled drivers build config 00:01:40.898 crypto/cnxk: not in enabled drivers build config 00:01:40.898 crypto/dpaa_sec: not in enabled drivers build config 00:01:40.898 crypto/dpaa2_sec: not in enabled drivers build config 00:01:40.898 crypto/ipsec_mb: not in enabled drivers build config 00:01:40.898 crypto/mlx5: not in enabled drivers build config 00:01:40.898 crypto/mvsam: not in enabled drivers build config 00:01:40.898 crypto/nitrox: not in enabled drivers build config 00:01:40.898 crypto/null: not in enabled drivers build config 00:01:40.898 crypto/octeontx: not in enabled drivers build config 00:01:40.898 crypto/openssl: not in enabled drivers build config 00:01:40.898 crypto/scheduler: not in enabled drivers build config 00:01:40.898 crypto/uadk: not in enabled drivers build config 00:01:40.898 crypto/virtio: not in enabled drivers build config 00:01:40.898 compress/isal: not in enabled drivers build config 00:01:40.898 compress/mlx5: not in enabled drivers build config 00:01:40.898 compress/nitrox: not in enabled drivers build config 00:01:40.898 compress/octeontx: not in enabled drivers build config 00:01:40.898 compress/zlib: not in enabled drivers build config 00:01:40.898 regex/*: missing internal dependency, "regexdev" 00:01:40.898 ml/*: missing internal dependency, "mldev" 00:01:40.898 vdpa/ifc: not in enabled drivers build config 00:01:40.898 vdpa/mlx5: not in enabled drivers build config 00:01:40.898 vdpa/nfp: not in enabled drivers build config 00:01:40.898 vdpa/sfc: not in enabled drivers build config 00:01:40.898 event/*: missing internal dependency, "eventdev" 00:01:40.898 baseband/*: missing internal dependency, "bbdev" 00:01:40.898 gpu/*: missing internal dependency, "gpudev" 00:01:40.898 00:01:40.898 00:01:40.898 Build targets in project: 85 00:01:40.898 00:01:40.898 DPDK 24.03.0 00:01:40.898 00:01:40.898 User defined options 00:01:40.898 buildtype : debug 00:01:40.898 default_library : shared 00:01:40.898 libdir : lib 00:01:40.898 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:40.898 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:40.898 c_link_args : 00:01:40.898 cpu_instruction_set: native 00:01:40.898 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:40.898 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:40.898 enable_docs : false 00:01:40.898 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:40.898 enable_kmods : false 00:01:40.898 max_lcores : 128 00:01:40.898 tests : false 00:01:40.898 00:01:40.898 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:40.898 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:40.898 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:40.898 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:40.898 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:40.898 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:40.898 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:40.898 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:40.898 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:40.898 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:40.898 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:40.898 [10/268] Linking static target lib/librte_kvargs.a 00:01:40.898 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:40.898 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:40.898 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:40.898 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:40.898 [15/268] Linking static target lib/librte_log.a 00:01:40.898 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:41.469 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.729 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:41.729 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:41.729 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:41.729 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:41.729 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:41.729 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:41.729 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:41.729 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:41.729 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:41.729 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:41.730 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:41.730 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:41.730 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:41.730 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:41.730 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:41.730 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:41.730 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:41.730 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:41.730 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:41.730 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:41.730 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:41.730 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:41.730 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:41.730 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:41.730 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:41.730 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:41.730 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:41.730 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:41.730 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:41.730 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:41.730 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:41.730 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:41.730 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:41.730 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:41.730 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:41.730 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:41.730 [54/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:41.730 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:41.989 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:41.989 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:41.989 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:41.989 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:41.989 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:41.989 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:41.989 [62/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:41.989 [63/268] Linking static target lib/librte_telemetry.a 00:01:42.254 [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.254 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:42.254 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:42.254 [67/268] Linking target lib/librte_log.so.24.1 00:01:42.254 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:42.254 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:42.254 [70/268] Linking static target lib/librte_pci.a 
00:01:42.254 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:42.514 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:42.514 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:42.514 [74/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:42.514 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:42.514 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:42.514 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:42.514 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:42.514 [79/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:42.514 [80/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:42.514 [81/268] Linking target lib/librte_kvargs.so.24.1 00:01:42.775 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:42.775 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:42.775 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:42.775 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:42.775 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:42.775 [87/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:42.775 [88/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:42.775 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:42.775 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:42.775 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:42.775 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:42.775 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:42.775 [94/268] Linking static target lib/librte_ring.a 00:01:42.775 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:42.775 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:42.775 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:42.775 [98/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.775 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:42.775 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:42.775 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:42.775 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:42.775 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:42.775 [104/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:42.775 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:42.775 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:42.775 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:42.775 [108/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:42.775 [109/268] Linking static target lib/librte_eal.a 00:01:43.036 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:43.036 [111/268] Compiling C object 
lib/librte_power.a.p/power_power_common.c.o 00:01:43.036 [112/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:43.036 [113/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:43.036 [114/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:43.036 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:43.036 [116/268] Linking static target lib/librte_mempool.a 00:01:43.036 [117/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:43.036 [118/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:43.036 [119/268] Linking static target lib/librte_meter.a 00:01:43.036 [120/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:43.036 [121/268] Linking static target lib/librte_rcu.a 00:01:43.036 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:43.036 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:43.036 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:43.036 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:43.036 [126/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.036 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:43.036 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:43.341 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:43.341 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:43.341 [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:43.341 [132/268] Linking target lib/librte_telemetry.so.24.1 00:01:43.341 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:43.341 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:43.341 [135/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:43.341 [136/268] Linking static target lib/librte_net.a 00:01:43.341 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.341 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:43.341 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:43.604 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:43.604 [141/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:43.604 [142/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:43.604 [143/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.604 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:43.604 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:43.604 [146/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.604 [147/268] Linking static target lib/librte_cmdline.a 00:01:43.604 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:43.604 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:43.863 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:43.863 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 
00:01:43.863 [152/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:43.863 [153/268] Linking static target lib/librte_timer.a 00:01:43.863 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:43.863 [155/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.863 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:43.863 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:43.863 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:43.863 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:43.863 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:43.863 [161/268] Linking static target lib/librte_dmadev.a 00:01:43.863 [162/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:43.863 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:43.863 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:44.120 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:44.120 [166/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:44.120 [167/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.120 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:44.120 [169/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:44.120 [170/268] Linking static target lib/librte_compressdev.a 00:01:44.120 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:44.120 [172/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.120 [173/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:44.120 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:44.120 [175/268] Linking static target lib/librte_power.a 00:01:44.120 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:44.378 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:44.378 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:44.378 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:44.378 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:44.378 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:44.378 [182/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:44.378 [183/268] Linking static target lib/librte_hash.a 00:01:44.378 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:44.378 [185/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.378 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:44.378 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:44.378 [188/268] Linking static target lib/librte_reorder.a 00:01:44.378 [189/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:44.378 [190/268] Linking static target lib/librte_mbuf.a 00:01:44.378 [191/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:44.378 [192/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:44.378 [193/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:44.635 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:44.635 [195/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:44.635 [196/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:44.635 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:44.635 [198/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:44.635 [199/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:44.635 [200/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.635 [201/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:44.635 [202/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.635 [203/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:44.635 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:44.635 [205/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:44.635 [206/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:44.635 [207/268] Linking static target lib/librte_security.a 00:01:44.635 [208/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:44.635 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:44.636 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:44.636 [211/268] Linking static target drivers/librte_bus_vdev.a 00:01:44.636 [212/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.636 [213/268] Linking static target drivers/librte_bus_pci.a 00:01:44.893 [214/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:44.893 [215/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:44.893 [216/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:44.893 [217/268] Linking static target drivers/librte_mempool_ring.a 00:01:44.893 [218/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.893 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.893 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.151 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:45.151 [222/268] Linking static target lib/librte_ethdev.a 00:01:45.151 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.151 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.151 [225/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:45.151 [226/268] Linking static target lib/librte_cryptodev.a 00:01:46.526 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.093 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:49.624 [229/268] 
Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.624 [230/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.624 [231/268] Linking target lib/librte_eal.so.24.1 00:01:49.625 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:49.625 [233/268] Linking target lib/librte_ring.so.24.1 00:01:49.625 [234/268] Linking target lib/librte_pci.so.24.1 00:01:49.625 [235/268] Linking target lib/librte_dmadev.so.24.1 00:01:49.625 [236/268] Linking target lib/librte_meter.so.24.1 00:01:49.625 [237/268] Linking target lib/librte_timer.so.24.1 00:01:49.625 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:49.625 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:49.625 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:49.625 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:49.625 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:49.625 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:49.625 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:49.625 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:49.625 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:49.625 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:49.625 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:49.883 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:49.883 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:49.883 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:49.883 [252/268] Linking target lib/librte_net.so.24.1 00:01:49.883 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:49.883 [254/268] Linking target lib/librte_reorder.so.24.1 00:01:49.883 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:50.141 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:50.141 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:50.141 [258/268] Linking target lib/librte_security.so.24.1 00:01:50.141 [259/268] Linking target lib/librte_hash.so.24.1 00:01:50.141 [260/268] Linking target lib/librte_cmdline.so.24.1 00:01:50.141 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:50.141 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:50.141 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:50.400 [264/268] Linking target lib/librte_power.so.24.1 00:01:52.992 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:52.992 [266/268] Linking static target lib/librte_vhost.a 00:01:54.388 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.388 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:54.388 INFO: autodetecting backend as ninja 00:01:54.388 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:55.324 CC lib/ut_mock/mock.o 00:01:55.324 CC lib/log/log.o 00:01:55.324 CC lib/ut/ut.o 00:01:55.324 CC lib/log/log_flags.o 00:01:55.324 CC 
lib/log/log_deprecated.o 00:01:55.324 LIB libspdk_log.a 00:01:55.324 LIB libspdk_ut.a 00:01:55.325 LIB libspdk_ut_mock.a 00:01:55.325 SO libspdk_ut.so.2.0 00:01:55.325 SO libspdk_ut_mock.so.6.0 00:01:55.325 SO libspdk_log.so.7.0 00:01:55.325 SYMLINK libspdk_ut.so 00:01:55.325 SYMLINK libspdk_ut_mock.so 00:01:55.325 SYMLINK libspdk_log.so 00:01:55.583 CC lib/ioat/ioat.o 00:01:55.583 CC lib/dma/dma.o 00:01:55.583 CXX lib/trace_parser/trace.o 00:01:55.583 CC lib/util/base64.o 00:01:55.583 CC lib/util/bit_array.o 00:01:55.583 CC lib/util/cpuset.o 00:01:55.583 CC lib/util/crc16.o 00:01:55.583 CC lib/util/crc32.o 00:01:55.583 CC lib/util/crc32c.o 00:01:55.583 CC lib/util/crc32_ieee.o 00:01:55.583 CC lib/util/crc64.o 00:01:55.583 CC lib/util/dif.o 00:01:55.583 CC lib/util/fd.o 00:01:55.583 CC lib/util/fd_group.o 00:01:55.583 CC lib/util/file.o 00:01:55.583 CC lib/util/hexlify.o 00:01:55.583 CC lib/util/iov.o 00:01:55.583 CC lib/util/math.o 00:01:55.583 CC lib/util/net.o 00:01:55.583 CC lib/util/pipe.o 00:01:55.583 CC lib/util/strerror_tls.o 00:01:55.583 CC lib/util/string.o 00:01:55.583 CC lib/util/uuid.o 00:01:55.583 CC lib/util/xor.o 00:01:55.583 CC lib/util/zipf.o 00:01:55.583 CC lib/vfio_user/host/vfio_user_pci.o 00:01:55.583 CC lib/vfio_user/host/vfio_user.o 00:01:55.842 LIB libspdk_dma.a 00:01:55.842 SO libspdk_dma.so.4.0 00:01:55.842 SYMLINK libspdk_dma.so 00:01:55.842 LIB libspdk_ioat.a 00:01:55.842 SO libspdk_ioat.so.7.0 00:01:55.842 SYMLINK libspdk_ioat.so 00:01:55.842 LIB libspdk_vfio_user.a 00:01:55.842 SO libspdk_vfio_user.so.5.0 00:01:56.102 SYMLINK libspdk_vfio_user.so 00:01:56.102 LIB libspdk_util.a 00:01:56.102 SO libspdk_util.so.9.1 00:01:56.362 SYMLINK libspdk_util.so 00:01:56.362 CC lib/rdma_utils/rdma_utils.o 00:01:56.362 CC lib/conf/conf.o 00:01:56.362 CC lib/vmd/vmd.o 00:01:56.362 CC lib/rdma_provider/common.o 00:01:56.362 CC lib/vmd/led.o 00:01:56.362 CC lib/env_dpdk/env.o 00:01:56.362 CC lib/env_dpdk/memory.o 00:01:56.362 CC lib/json/json_parse.o 00:01:56.362 CC lib/env_dpdk/pci.o 00:01:56.362 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:56.362 CC lib/json/json_util.o 00:01:56.362 CC lib/env_dpdk/init.o 00:01:56.362 CC lib/idxd/idxd.o 00:01:56.362 CC lib/env_dpdk/threads.o 00:01:56.362 CC lib/json/json_write.o 00:01:56.362 CC lib/idxd/idxd_user.o 00:01:56.362 CC lib/env_dpdk/pci_ioat.o 00:01:56.362 CC lib/idxd/idxd_kernel.o 00:01:56.362 CC lib/env_dpdk/pci_virtio.o 00:01:56.362 CC lib/env_dpdk/pci_vmd.o 00:01:56.362 CC lib/env_dpdk/pci_idxd.o 00:01:56.362 CC lib/env_dpdk/pci_event.o 00:01:56.362 CC lib/env_dpdk/sigbus_handler.o 00:01:56.362 CC lib/env_dpdk/pci_dpdk.o 00:01:56.362 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:56.362 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:56.362 LIB libspdk_trace_parser.a 00:01:56.362 SO libspdk_trace_parser.so.5.0 00:01:56.621 SYMLINK libspdk_trace_parser.so 00:01:56.621 LIB libspdk_rdma_provider.a 00:01:56.879 LIB libspdk_rdma_utils.a 00:01:56.879 SO libspdk_rdma_provider.so.6.0 00:01:56.879 LIB libspdk_json.a 00:01:56.879 SO libspdk_rdma_utils.so.1.0 00:01:56.879 LIB libspdk_conf.a 00:01:56.879 SO libspdk_json.so.6.0 00:01:56.879 SO libspdk_conf.so.6.0 00:01:56.879 SYMLINK libspdk_rdma_provider.so 00:01:56.879 SYMLINK libspdk_rdma_utils.so 00:01:56.879 SYMLINK libspdk_conf.so 00:01:56.879 SYMLINK libspdk_json.so 00:01:57.138 CC lib/jsonrpc/jsonrpc_server.o 00:01:57.138 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:57.138 CC lib/jsonrpc/jsonrpc_client.o 00:01:57.138 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:57.138 LIB libspdk_idxd.a 00:01:57.138 
SO libspdk_idxd.so.12.0 00:01:57.138 SYMLINK libspdk_idxd.so 00:01:57.138 LIB libspdk_vmd.a 00:01:57.138 SO libspdk_vmd.so.6.0 00:01:57.138 SYMLINK libspdk_vmd.so 00:01:57.396 LIB libspdk_jsonrpc.a 00:01:57.396 SO libspdk_jsonrpc.so.6.0 00:01:57.396 SYMLINK libspdk_jsonrpc.so 00:01:57.656 CC lib/rpc/rpc.o 00:01:57.656 LIB libspdk_rpc.a 00:01:57.913 SO libspdk_rpc.so.6.0 00:01:57.913 SYMLINK libspdk_rpc.so 00:01:57.913 CC lib/trace/trace.o 00:01:57.913 CC lib/trace/trace_flags.o 00:01:57.913 CC lib/trace/trace_rpc.o 00:01:57.913 CC lib/notify/notify.o 00:01:57.913 CC lib/keyring/keyring.o 00:01:57.913 CC lib/notify/notify_rpc.o 00:01:57.913 CC lib/keyring/keyring_rpc.o 00:01:58.171 LIB libspdk_notify.a 00:01:58.171 SO libspdk_notify.so.6.0 00:01:58.171 LIB libspdk_keyring.a 00:01:58.171 SYMLINK libspdk_notify.so 00:01:58.171 LIB libspdk_trace.a 00:01:58.171 SO libspdk_keyring.so.1.0 00:01:58.171 SO libspdk_trace.so.10.0 00:01:58.428 SYMLINK libspdk_keyring.so 00:01:58.428 SYMLINK libspdk_trace.so 00:01:58.428 LIB libspdk_env_dpdk.a 00:01:58.428 SO libspdk_env_dpdk.so.15.0 00:01:58.428 CC lib/thread/thread.o 00:01:58.428 CC lib/thread/iobuf.o 00:01:58.428 CC lib/sock/sock.o 00:01:58.428 CC lib/sock/sock_rpc.o 00:01:58.686 SYMLINK libspdk_env_dpdk.so 00:01:58.945 LIB libspdk_sock.a 00:01:58.945 SO libspdk_sock.so.10.0 00:01:58.945 SYMLINK libspdk_sock.so 00:01:59.203 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:59.203 CC lib/nvme/nvme_ctrlr.o 00:01:59.203 CC lib/nvme/nvme_fabric.o 00:01:59.203 CC lib/nvme/nvme_ns_cmd.o 00:01:59.203 CC lib/nvme/nvme_ns.o 00:01:59.203 CC lib/nvme/nvme_pcie_common.o 00:01:59.203 CC lib/nvme/nvme_pcie.o 00:01:59.203 CC lib/nvme/nvme_qpair.o 00:01:59.203 CC lib/nvme/nvme.o 00:01:59.203 CC lib/nvme/nvme_quirks.o 00:01:59.203 CC lib/nvme/nvme_transport.o 00:01:59.203 CC lib/nvme/nvme_discovery.o 00:01:59.203 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:59.203 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:59.203 CC lib/nvme/nvme_tcp.o 00:01:59.203 CC lib/nvme/nvme_opal.o 00:01:59.203 CC lib/nvme/nvme_io_msg.o 00:01:59.203 CC lib/nvme/nvme_poll_group.o 00:01:59.203 CC lib/nvme/nvme_zns.o 00:01:59.203 CC lib/nvme/nvme_stubs.o 00:01:59.203 CC lib/nvme/nvme_auth.o 00:01:59.203 CC lib/nvme/nvme_vfio_user.o 00:01:59.203 CC lib/nvme/nvme_cuse.o 00:01:59.203 CC lib/nvme/nvme_rdma.o 00:02:00.137 LIB libspdk_thread.a 00:02:00.137 SO libspdk_thread.so.10.1 00:02:00.137 SYMLINK libspdk_thread.so 00:02:00.394 CC lib/init/json_config.o 00:02:00.395 CC lib/vfu_tgt/tgt_endpoint.o 00:02:00.395 CC lib/virtio/virtio.o 00:02:00.395 CC lib/accel/accel.o 00:02:00.395 CC lib/blob/blobstore.o 00:02:00.395 CC lib/init/subsystem.o 00:02:00.395 CC lib/vfu_tgt/tgt_rpc.o 00:02:00.395 CC lib/virtio/virtio_vhost_user.o 00:02:00.395 CC lib/accel/accel_rpc.o 00:02:00.395 CC lib/blob/request.o 00:02:00.395 CC lib/init/subsystem_rpc.o 00:02:00.395 CC lib/virtio/virtio_vfio_user.o 00:02:00.395 CC lib/init/rpc.o 00:02:00.395 CC lib/accel/accel_sw.o 00:02:00.395 CC lib/blob/zeroes.o 00:02:00.395 CC lib/virtio/virtio_pci.o 00:02:00.395 CC lib/blob/blob_bs_dev.o 00:02:00.653 LIB libspdk_init.a 00:02:00.653 SO libspdk_init.so.5.0 00:02:00.653 LIB libspdk_virtio.a 00:02:00.653 LIB libspdk_vfu_tgt.a 00:02:00.653 SYMLINK libspdk_init.so 00:02:00.653 SO libspdk_virtio.so.7.0 00:02:00.653 SO libspdk_vfu_tgt.so.3.0 00:02:00.910 SYMLINK libspdk_vfu_tgt.so 00:02:00.910 SYMLINK libspdk_virtio.so 00:02:00.910 CC lib/event/app.o 00:02:00.910 CC lib/event/reactor.o 00:02:00.910 CC lib/event/log_rpc.o 00:02:00.910 CC lib/event/app_rpc.o 
00:02:00.910 CC lib/event/scheduler_static.o 00:02:01.475 LIB libspdk_event.a 00:02:01.475 SO libspdk_event.so.14.0 00:02:01.475 LIB libspdk_accel.a 00:02:01.475 SYMLINK libspdk_event.so 00:02:01.475 SO libspdk_accel.so.15.1 00:02:01.475 SYMLINK libspdk_accel.so 00:02:01.475 LIB libspdk_nvme.a 00:02:01.732 CC lib/bdev/bdev.o 00:02:01.732 CC lib/bdev/bdev_rpc.o 00:02:01.732 CC lib/bdev/bdev_zone.o 00:02:01.732 CC lib/bdev/part.o 00:02:01.732 CC lib/bdev/scsi_nvme.o 00:02:01.732 SO libspdk_nvme.so.13.1 00:02:01.989 SYMLINK libspdk_nvme.so 00:02:03.361 LIB libspdk_blob.a 00:02:03.361 SO libspdk_blob.so.11.0 00:02:03.361 SYMLINK libspdk_blob.so 00:02:03.619 CC lib/blobfs/blobfs.o 00:02:03.619 CC lib/blobfs/tree.o 00:02:03.619 CC lib/lvol/lvol.o 00:02:04.186 LIB libspdk_bdev.a 00:02:04.186 SO libspdk_bdev.so.15.1 00:02:04.453 SYMLINK libspdk_bdev.so 00:02:04.453 CC lib/scsi/dev.o 00:02:04.453 CC lib/nvmf/ctrlr.o 00:02:04.453 CC lib/nbd/nbd.o 00:02:04.453 CC lib/ublk/ublk.o 00:02:04.454 CC lib/scsi/lun.o 00:02:04.454 CC lib/nvmf/ctrlr_discovery.o 00:02:04.454 CC lib/ublk/ublk_rpc.o 00:02:04.454 CC lib/ftl/ftl_core.o 00:02:04.454 CC lib/scsi/port.o 00:02:04.454 CC lib/nvmf/ctrlr_bdev.o 00:02:04.454 CC lib/nbd/nbd_rpc.o 00:02:04.454 CC lib/ftl/ftl_init.o 00:02:04.454 CC lib/scsi/scsi.o 00:02:04.454 CC lib/nvmf/subsystem.o 00:02:04.454 CC lib/scsi/scsi_bdev.o 00:02:04.454 CC lib/nvmf/nvmf.o 00:02:04.454 CC lib/ftl/ftl_layout.o 00:02:04.454 CC lib/ftl/ftl_debug.o 00:02:04.454 CC lib/nvmf/nvmf_rpc.o 00:02:04.454 CC lib/scsi/scsi_pr.o 00:02:04.454 CC lib/nvmf/transport.o 00:02:04.454 CC lib/ftl/ftl_io.o 00:02:04.454 CC lib/nvmf/tcp.o 00:02:04.454 CC lib/ftl/ftl_sb.o 00:02:04.454 CC lib/scsi/task.o 00:02:04.454 CC lib/scsi/scsi_rpc.o 00:02:04.454 CC lib/ftl/ftl_l2p.o 00:02:04.454 CC lib/nvmf/stubs.o 00:02:04.454 CC lib/nvmf/mdns_server.o 00:02:04.454 CC lib/ftl/ftl_l2p_flat.o 00:02:04.454 CC lib/nvmf/vfio_user.o 00:02:04.454 CC lib/nvmf/rdma.o 00:02:04.454 CC lib/ftl/ftl_nv_cache.o 00:02:04.454 CC lib/ftl/ftl_band.o 00:02:04.454 CC lib/nvmf/auth.o 00:02:04.454 CC lib/ftl/ftl_band_ops.o 00:02:04.454 CC lib/ftl/ftl_writer.o 00:02:04.454 CC lib/ftl/ftl_rq.o 00:02:04.454 CC lib/ftl/ftl_reloc.o 00:02:04.454 CC lib/ftl/ftl_l2p_cache.o 00:02:04.454 CC lib/ftl/ftl_p2l.o 00:02:04.454 CC lib/ftl/mngt/ftl_mngt.o 00:02:04.454 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:04.454 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:04.454 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:04.454 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:04.454 LIB libspdk_blobfs.a 00:02:04.454 LIB libspdk_lvol.a 00:02:04.454 SO libspdk_lvol.so.10.0 00:02:04.713 SO libspdk_blobfs.so.10.0 00:02:04.713 SYMLINK libspdk_lvol.so 00:02:04.713 SYMLINK libspdk_blobfs.so 00:02:04.713 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:04.713 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:04.713 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:04.975 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:04.975 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:04.975 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:04.975 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:04.975 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:04.975 CC lib/ftl/utils/ftl_conf.o 00:02:04.975 CC lib/ftl/utils/ftl_md.o 00:02:04.975 CC lib/ftl/utils/ftl_mempool.o 00:02:04.975 CC lib/ftl/utils/ftl_bitmap.o 00:02:04.975 CC lib/ftl/utils/ftl_property.o 00:02:04.975 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:04.975 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:04.975 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:04.975 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:04.975 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:02:04.975 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:04.975 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:04.975 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:05.245 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:05.245 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:05.245 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:05.245 CC lib/ftl/base/ftl_base_dev.o 00:02:05.245 CC lib/ftl/base/ftl_base_bdev.o 00:02:05.245 CC lib/ftl/ftl_trace.o 00:02:05.245 LIB libspdk_nbd.a 00:02:05.245 SO libspdk_nbd.so.7.0 00:02:05.548 SYMLINK libspdk_nbd.so 00:02:05.548 LIB libspdk_scsi.a 00:02:05.548 SO libspdk_scsi.so.9.0 00:02:05.548 SYMLINK libspdk_scsi.so 00:02:05.548 LIB libspdk_ublk.a 00:02:05.548 SO libspdk_ublk.so.3.0 00:02:05.807 SYMLINK libspdk_ublk.so 00:02:05.807 CC lib/iscsi/conn.o 00:02:05.807 CC lib/vhost/vhost.o 00:02:05.807 CC lib/iscsi/init_grp.o 00:02:05.807 CC lib/vhost/vhost_rpc.o 00:02:05.807 CC lib/vhost/vhost_scsi.o 00:02:05.807 CC lib/iscsi/iscsi.o 00:02:05.807 CC lib/vhost/vhost_blk.o 00:02:05.807 CC lib/iscsi/md5.o 00:02:05.807 CC lib/vhost/rte_vhost_user.o 00:02:05.807 CC lib/iscsi/param.o 00:02:05.807 CC lib/iscsi/portal_grp.o 00:02:05.807 CC lib/iscsi/tgt_node.o 00:02:05.807 CC lib/iscsi/iscsi_subsystem.o 00:02:05.807 CC lib/iscsi/iscsi_rpc.o 00:02:05.807 CC lib/iscsi/task.o 00:02:06.066 LIB libspdk_ftl.a 00:02:06.066 SO libspdk_ftl.so.9.0 00:02:06.633 SYMLINK libspdk_ftl.so 00:02:06.891 LIB libspdk_vhost.a 00:02:06.891 SO libspdk_vhost.so.8.0 00:02:07.150 LIB libspdk_nvmf.a 00:02:07.150 SYMLINK libspdk_vhost.so 00:02:07.150 LIB libspdk_iscsi.a 00:02:07.150 SO libspdk_nvmf.so.19.0 00:02:07.150 SO libspdk_iscsi.so.8.0 00:02:07.408 SYMLINK libspdk_iscsi.so 00:02:07.408 SYMLINK libspdk_nvmf.so 00:02:07.667 CC module/vfu_device/vfu_virtio.o 00:02:07.667 CC module/vfu_device/vfu_virtio_blk.o 00:02:07.667 CC module/env_dpdk/env_dpdk_rpc.o 00:02:07.667 CC module/vfu_device/vfu_virtio_scsi.o 00:02:07.667 CC module/vfu_device/vfu_virtio_rpc.o 00:02:07.667 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:07.667 CC module/blob/bdev/blob_bdev.o 00:02:07.667 CC module/accel/error/accel_error.o 00:02:07.667 CC module/accel/dsa/accel_dsa.o 00:02:07.667 CC module/accel/dsa/accel_dsa_rpc.o 00:02:07.667 CC module/accel/error/accel_error_rpc.o 00:02:07.667 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:07.667 CC module/sock/posix/posix.o 00:02:07.667 CC module/scheduler/gscheduler/gscheduler.o 00:02:07.667 CC module/accel/iaa/accel_iaa.o 00:02:07.667 CC module/keyring/file/keyring.o 00:02:07.667 CC module/accel/iaa/accel_iaa_rpc.o 00:02:07.667 CC module/keyring/file/keyring_rpc.o 00:02:07.667 CC module/accel/ioat/accel_ioat.o 00:02:07.667 CC module/keyring/linux/keyring.o 00:02:07.667 CC module/accel/ioat/accel_ioat_rpc.o 00:02:07.667 CC module/keyring/linux/keyring_rpc.o 00:02:07.667 LIB libspdk_env_dpdk_rpc.a 00:02:07.667 SO libspdk_env_dpdk_rpc.so.6.0 00:02:07.925 SYMLINK libspdk_env_dpdk_rpc.so 00:02:07.925 LIB libspdk_keyring_linux.a 00:02:07.925 LIB libspdk_keyring_file.a 00:02:07.925 LIB libspdk_scheduler_gscheduler.a 00:02:07.925 SO libspdk_keyring_linux.so.1.0 00:02:07.925 SO libspdk_keyring_file.so.1.0 00:02:07.925 SO libspdk_scheduler_gscheduler.so.4.0 00:02:07.925 LIB libspdk_accel_error.a 00:02:07.925 LIB libspdk_accel_ioat.a 00:02:07.925 LIB libspdk_scheduler_dynamic.a 00:02:07.925 LIB libspdk_accel_iaa.a 00:02:07.925 SO libspdk_accel_error.so.2.0 00:02:07.925 SO libspdk_scheduler_dynamic.so.4.0 00:02:07.925 SO libspdk_accel_ioat.so.6.0 00:02:07.925 SYMLINK 
libspdk_scheduler_gscheduler.so 00:02:07.925 SYMLINK libspdk_keyring_linux.so 00:02:07.925 SYMLINK libspdk_keyring_file.so 00:02:07.925 SO libspdk_accel_iaa.so.3.0 00:02:07.925 LIB libspdk_scheduler_dpdk_governor.a 00:02:07.925 LIB libspdk_accel_dsa.a 00:02:07.925 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:07.925 SYMLINK libspdk_accel_error.so 00:02:07.925 SYMLINK libspdk_scheduler_dynamic.so 00:02:07.925 LIB libspdk_blob_bdev.a 00:02:07.925 SYMLINK libspdk_accel_ioat.so 00:02:07.925 SO libspdk_accel_dsa.so.5.0 00:02:07.925 SYMLINK libspdk_accel_iaa.so 00:02:07.925 SO libspdk_blob_bdev.so.11.0 00:02:07.925 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:08.183 SYMLINK libspdk_accel_dsa.so 00:02:08.183 SYMLINK libspdk_blob_bdev.so 00:02:08.183 LIB libspdk_vfu_device.a 00:02:08.442 SO libspdk_vfu_device.so.3.0 00:02:08.442 CC module/bdev/gpt/gpt.o 00:02:08.442 CC module/bdev/error/vbdev_error.o 00:02:08.442 CC module/bdev/gpt/vbdev_gpt.o 00:02:08.442 CC module/bdev/delay/vbdev_delay.o 00:02:08.442 CC module/bdev/error/vbdev_error_rpc.o 00:02:08.442 CC module/bdev/lvol/vbdev_lvol.o 00:02:08.443 CC module/bdev/malloc/bdev_malloc.o 00:02:08.443 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:08.443 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:08.443 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:08.443 CC module/bdev/nvme/bdev_nvme.o 00:02:08.443 CC module/bdev/split/vbdev_split.o 00:02:08.443 CC module/blobfs/bdev/blobfs_bdev.o 00:02:08.443 CC module/bdev/split/vbdev_split_rpc.o 00:02:08.443 CC module/bdev/nvme/nvme_rpc.o 00:02:08.443 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:08.443 CC module/bdev/null/bdev_null.o 00:02:08.443 CC module/bdev/null/bdev_null_rpc.o 00:02:08.443 CC module/bdev/iscsi/bdev_iscsi.o 00:02:08.443 CC module/bdev/aio/bdev_aio.o 00:02:08.443 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:08.443 CC module/bdev/nvme/bdev_mdns_client.o 00:02:08.443 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:08.443 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:08.443 CC module/bdev/aio/bdev_aio_rpc.o 00:02:08.443 CC module/bdev/passthru/vbdev_passthru.o 00:02:08.443 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:08.443 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:08.443 CC module/bdev/ftl/bdev_ftl.o 00:02:08.443 CC module/bdev/nvme/vbdev_opal.o 00:02:08.443 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:08.443 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:08.443 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:08.443 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:08.443 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:08.443 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:08.443 CC module/bdev/raid/bdev_raid.o 00:02:08.443 CC module/bdev/raid/bdev_raid_rpc.o 00:02:08.443 CC module/bdev/raid/bdev_raid_sb.o 00:02:08.443 CC module/bdev/raid/raid0.o 00:02:08.443 CC module/bdev/raid/raid1.o 00:02:08.443 CC module/bdev/raid/concat.o 00:02:08.443 SYMLINK libspdk_vfu_device.so 00:02:08.704 LIB libspdk_sock_posix.a 00:02:08.704 SO libspdk_sock_posix.so.6.0 00:02:08.704 LIB libspdk_blobfs_bdev.a 00:02:08.704 SYMLINK libspdk_sock_posix.so 00:02:08.704 LIB libspdk_bdev_null.a 00:02:08.704 SO libspdk_blobfs_bdev.so.6.0 00:02:08.704 SO libspdk_bdev_null.so.6.0 00:02:08.704 LIB libspdk_bdev_error.a 00:02:08.704 SYMLINK libspdk_blobfs_bdev.so 00:02:08.704 SYMLINK libspdk_bdev_null.so 00:02:08.704 LIB libspdk_bdev_split.a 00:02:08.704 SO libspdk_bdev_error.so.6.0 00:02:08.704 LIB libspdk_bdev_ftl.a 00:02:08.962 LIB libspdk_bdev_zone_block.a 00:02:08.962 SO libspdk_bdev_split.so.6.0 00:02:08.962 LIB 
libspdk_bdev_malloc.a 00:02:08.962 LIB libspdk_bdev_gpt.a 00:02:08.962 SO libspdk_bdev_ftl.so.6.0 00:02:08.962 SO libspdk_bdev_zone_block.so.6.0 00:02:08.962 SO libspdk_bdev_malloc.so.6.0 00:02:08.962 SO libspdk_bdev_gpt.so.6.0 00:02:08.962 SYMLINK libspdk_bdev_error.so 00:02:08.962 LIB libspdk_bdev_passthru.a 00:02:08.962 SYMLINK libspdk_bdev_split.so 00:02:08.962 LIB libspdk_bdev_iscsi.a 00:02:08.962 SYMLINK libspdk_bdev_ftl.so 00:02:08.962 SO libspdk_bdev_passthru.so.6.0 00:02:08.962 SYMLINK libspdk_bdev_zone_block.so 00:02:08.962 SYMLINK libspdk_bdev_gpt.so 00:02:08.962 SYMLINK libspdk_bdev_malloc.so 00:02:08.962 SO libspdk_bdev_iscsi.so.6.0 00:02:08.962 LIB libspdk_bdev_aio.a 00:02:08.962 LIB libspdk_bdev_delay.a 00:02:08.962 SO libspdk_bdev_aio.so.6.0 00:02:08.962 SYMLINK libspdk_bdev_passthru.so 00:02:08.962 SO libspdk_bdev_delay.so.6.0 00:02:08.962 SYMLINK libspdk_bdev_iscsi.so 00:02:08.962 SYMLINK libspdk_bdev_aio.so 00:02:08.962 LIB libspdk_bdev_virtio.a 00:02:08.962 SYMLINK libspdk_bdev_delay.so 00:02:08.962 SO libspdk_bdev_virtio.so.6.0 00:02:09.219 LIB libspdk_bdev_lvol.a 00:02:09.219 SYMLINK libspdk_bdev_virtio.so 00:02:09.219 SO libspdk_bdev_lvol.so.6.0 00:02:09.219 SYMLINK libspdk_bdev_lvol.so 00:02:09.477 LIB libspdk_bdev_raid.a 00:02:09.477 SO libspdk_bdev_raid.so.6.0 00:02:09.735 SYMLINK libspdk_bdev_raid.so 00:02:10.675 LIB libspdk_bdev_nvme.a 00:02:10.675 SO libspdk_bdev_nvme.so.7.0 00:02:10.675 SYMLINK libspdk_bdev_nvme.so 00:02:11.241 CC module/event/subsystems/iobuf/iobuf.o 00:02:11.241 CC module/event/subsystems/scheduler/scheduler.o 00:02:11.241 CC module/event/subsystems/keyring/keyring.o 00:02:11.241 CC module/event/subsystems/vmd/vmd.o 00:02:11.241 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:11.241 CC module/event/subsystems/sock/sock.o 00:02:11.241 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:11.241 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:11.241 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:11.241 LIB libspdk_event_keyring.a 00:02:11.241 LIB libspdk_event_vhost_blk.a 00:02:11.241 LIB libspdk_event_scheduler.a 00:02:11.241 LIB libspdk_event_vfu_tgt.a 00:02:11.241 LIB libspdk_event_vmd.a 00:02:11.241 LIB libspdk_event_sock.a 00:02:11.241 LIB libspdk_event_iobuf.a 00:02:11.241 SO libspdk_event_keyring.so.1.0 00:02:11.241 SO libspdk_event_scheduler.so.4.0 00:02:11.241 SO libspdk_event_vhost_blk.so.3.0 00:02:11.241 SO libspdk_event_vfu_tgt.so.3.0 00:02:11.241 SO libspdk_event_vmd.so.6.0 00:02:11.241 SO libspdk_event_sock.so.5.0 00:02:11.241 SO libspdk_event_iobuf.so.3.0 00:02:11.241 SYMLINK libspdk_event_keyring.so 00:02:11.241 SYMLINK libspdk_event_vhost_blk.so 00:02:11.241 SYMLINK libspdk_event_scheduler.so 00:02:11.241 SYMLINK libspdk_event_vfu_tgt.so 00:02:11.241 SYMLINK libspdk_event_sock.so 00:02:11.241 SYMLINK libspdk_event_vmd.so 00:02:11.498 SYMLINK libspdk_event_iobuf.so 00:02:11.498 CC module/event/subsystems/accel/accel.o 00:02:11.756 LIB libspdk_event_accel.a 00:02:11.756 SO libspdk_event_accel.so.6.0 00:02:11.756 SYMLINK libspdk_event_accel.so 00:02:12.014 CC module/event/subsystems/bdev/bdev.o 00:02:12.014 LIB libspdk_event_bdev.a 00:02:12.273 SO libspdk_event_bdev.so.6.0 00:02:12.273 SYMLINK libspdk_event_bdev.so 00:02:12.273 CC module/event/subsystems/ublk/ublk.o 00:02:12.273 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:12.273 CC module/event/subsystems/nbd/nbd.o 00:02:12.273 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:12.273 CC module/event/subsystems/scsi/scsi.o 00:02:12.530 LIB libspdk_event_nbd.a 
00:02:12.530 LIB libspdk_event_ublk.a 00:02:12.530 LIB libspdk_event_scsi.a 00:02:12.530 SO libspdk_event_ublk.so.3.0 00:02:12.530 SO libspdk_event_nbd.so.6.0 00:02:12.530 SO libspdk_event_scsi.so.6.0 00:02:12.530 SYMLINK libspdk_event_nbd.so 00:02:12.530 SYMLINK libspdk_event_ublk.so 00:02:12.530 SYMLINK libspdk_event_scsi.so 00:02:12.530 LIB libspdk_event_nvmf.a 00:02:12.530 SO libspdk_event_nvmf.so.6.0 00:02:12.789 SYMLINK libspdk_event_nvmf.so 00:02:12.789 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:12.789 CC module/event/subsystems/iscsi/iscsi.o 00:02:12.789 LIB libspdk_event_vhost_scsi.a 00:02:13.047 SO libspdk_event_vhost_scsi.so.3.0 00:02:13.047 LIB libspdk_event_iscsi.a 00:02:13.047 SO libspdk_event_iscsi.so.6.0 00:02:13.047 SYMLINK libspdk_event_vhost_scsi.so 00:02:13.047 SYMLINK libspdk_event_iscsi.so 00:02:13.047 SO libspdk.so.6.0 00:02:13.047 SYMLINK libspdk.so 00:02:13.316 CC app/trace_record/trace_record.o 00:02:13.316 CXX app/trace/trace.o 00:02:13.316 CC app/spdk_lspci/spdk_lspci.o 00:02:13.316 CC app/spdk_nvme_identify/identify.o 00:02:13.316 TEST_HEADER include/spdk/accel_module.h 00:02:13.316 TEST_HEADER include/spdk/accel.h 00:02:13.316 CC app/spdk_nvme_perf/perf.o 00:02:13.316 TEST_HEADER include/spdk/assert.h 00:02:13.316 CC app/spdk_nvme_discover/discovery_aer.o 00:02:13.316 TEST_HEADER include/spdk/barrier.h 00:02:13.316 TEST_HEADER include/spdk/base64.h 00:02:13.316 TEST_HEADER include/spdk/bdev.h 00:02:13.316 TEST_HEADER include/spdk/bdev_module.h 00:02:13.316 TEST_HEADER include/spdk/bdev_zone.h 00:02:13.316 CC test/rpc_client/rpc_client_test.o 00:02:13.316 TEST_HEADER include/spdk/bit_array.h 00:02:13.316 CC app/spdk_top/spdk_top.o 00:02:13.316 TEST_HEADER include/spdk/bit_pool.h 00:02:13.316 TEST_HEADER include/spdk/blob_bdev.h 00:02:13.316 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:13.316 TEST_HEADER include/spdk/blobfs.h 00:02:13.316 TEST_HEADER include/spdk/blob.h 00:02:13.316 TEST_HEADER include/spdk/conf.h 00:02:13.316 TEST_HEADER include/spdk/config.h 00:02:13.316 TEST_HEADER include/spdk/cpuset.h 00:02:13.316 TEST_HEADER include/spdk/crc16.h 00:02:13.316 TEST_HEADER include/spdk/crc32.h 00:02:13.316 TEST_HEADER include/spdk/crc64.h 00:02:13.316 TEST_HEADER include/spdk/dif.h 00:02:13.316 TEST_HEADER include/spdk/dma.h 00:02:13.316 TEST_HEADER include/spdk/endian.h 00:02:13.316 TEST_HEADER include/spdk/env.h 00:02:13.316 TEST_HEADER include/spdk/env_dpdk.h 00:02:13.316 TEST_HEADER include/spdk/event.h 00:02:13.316 TEST_HEADER include/spdk/fd_group.h 00:02:13.316 TEST_HEADER include/spdk/fd.h 00:02:13.316 TEST_HEADER include/spdk/file.h 00:02:13.316 TEST_HEADER include/spdk/ftl.h 00:02:13.316 TEST_HEADER include/spdk/gpt_spec.h 00:02:13.316 TEST_HEADER include/spdk/hexlify.h 00:02:13.316 TEST_HEADER include/spdk/histogram_data.h 00:02:13.316 TEST_HEADER include/spdk/idxd.h 00:02:13.316 TEST_HEADER include/spdk/idxd_spec.h 00:02:13.316 TEST_HEADER include/spdk/init.h 00:02:13.316 TEST_HEADER include/spdk/ioat.h 00:02:13.316 TEST_HEADER include/spdk/ioat_spec.h 00:02:13.316 TEST_HEADER include/spdk/json.h 00:02:13.316 TEST_HEADER include/spdk/iscsi_spec.h 00:02:13.316 TEST_HEADER include/spdk/jsonrpc.h 00:02:13.316 TEST_HEADER include/spdk/keyring.h 00:02:13.316 TEST_HEADER include/spdk/keyring_module.h 00:02:13.316 TEST_HEADER include/spdk/likely.h 00:02:13.316 TEST_HEADER include/spdk/log.h 00:02:13.316 TEST_HEADER include/spdk/lvol.h 00:02:13.316 TEST_HEADER include/spdk/memory.h 00:02:13.316 TEST_HEADER include/spdk/mmio.h 00:02:13.316 
TEST_HEADER include/spdk/nbd.h 00:02:13.316 TEST_HEADER include/spdk/net.h 00:02:13.316 TEST_HEADER include/spdk/nvme.h 00:02:13.316 TEST_HEADER include/spdk/notify.h 00:02:13.316 TEST_HEADER include/spdk/nvme_intel.h 00:02:13.316 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:13.316 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:13.316 TEST_HEADER include/spdk/nvme_spec.h 00:02:13.316 TEST_HEADER include/spdk/nvme_zns.h 00:02:13.316 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:13.316 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:13.316 TEST_HEADER include/spdk/nvmf.h 00:02:13.316 TEST_HEADER include/spdk/nvmf_spec.h 00:02:13.316 TEST_HEADER include/spdk/nvmf_transport.h 00:02:13.316 TEST_HEADER include/spdk/opal.h 00:02:13.316 TEST_HEADER include/spdk/opal_spec.h 00:02:13.316 TEST_HEADER include/spdk/pci_ids.h 00:02:13.316 TEST_HEADER include/spdk/pipe.h 00:02:13.316 TEST_HEADER include/spdk/queue.h 00:02:13.316 TEST_HEADER include/spdk/reduce.h 00:02:13.316 TEST_HEADER include/spdk/rpc.h 00:02:13.316 TEST_HEADER include/spdk/scheduler.h 00:02:13.316 TEST_HEADER include/spdk/scsi.h 00:02:13.316 TEST_HEADER include/spdk/scsi_spec.h 00:02:13.316 TEST_HEADER include/spdk/sock.h 00:02:13.316 TEST_HEADER include/spdk/stdinc.h 00:02:13.316 TEST_HEADER include/spdk/string.h 00:02:13.316 TEST_HEADER include/spdk/thread.h 00:02:13.316 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:13.316 TEST_HEADER include/spdk/trace.h 00:02:13.316 TEST_HEADER include/spdk/trace_parser.h 00:02:13.316 TEST_HEADER include/spdk/tree.h 00:02:13.316 TEST_HEADER include/spdk/ublk.h 00:02:13.316 TEST_HEADER include/spdk/util.h 00:02:13.316 TEST_HEADER include/spdk/uuid.h 00:02:13.316 TEST_HEADER include/spdk/version.h 00:02:13.316 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:13.316 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:13.316 TEST_HEADER include/spdk/vhost.h 00:02:13.316 TEST_HEADER include/spdk/vmd.h 00:02:13.316 CC app/spdk_dd/spdk_dd.o 00:02:13.316 TEST_HEADER include/spdk/xor.h 00:02:13.316 TEST_HEADER include/spdk/zipf.h 00:02:13.316 CXX test/cpp_headers/accel.o 00:02:13.316 CXX test/cpp_headers/accel_module.o 00:02:13.316 CXX test/cpp_headers/assert.o 00:02:13.316 CXX test/cpp_headers/barrier.o 00:02:13.316 CXX test/cpp_headers/bdev.o 00:02:13.316 CXX test/cpp_headers/base64.o 00:02:13.316 CXX test/cpp_headers/bdev_module.o 00:02:13.316 CXX test/cpp_headers/bdev_zone.o 00:02:13.316 CXX test/cpp_headers/bit_array.o 00:02:13.316 CXX test/cpp_headers/bit_pool.o 00:02:13.316 CXX test/cpp_headers/blob_bdev.o 00:02:13.316 CXX test/cpp_headers/blobfs_bdev.o 00:02:13.316 CXX test/cpp_headers/blobfs.o 00:02:13.316 CXX test/cpp_headers/blob.o 00:02:13.316 CXX test/cpp_headers/conf.o 00:02:13.316 CXX test/cpp_headers/config.o 00:02:13.316 CXX test/cpp_headers/cpuset.o 00:02:13.316 CXX test/cpp_headers/crc16.o 00:02:13.316 CC app/nvmf_tgt/nvmf_main.o 00:02:13.316 CC app/iscsi_tgt/iscsi_tgt.o 00:02:13.316 CXX test/cpp_headers/crc32.o 00:02:13.316 CC examples/ioat/verify/verify.o 00:02:13.316 CC test/thread/poller_perf/poller_perf.o 00:02:13.316 CC test/app/histogram_perf/histogram_perf.o 00:02:13.578 CC test/env/memory/memory_ut.o 00:02:13.578 CC test/env/pci/pci_ut.o 00:02:13.578 CC test/env/vtophys/vtophys.o 00:02:13.578 CC test/app/jsoncat/jsoncat.o 00:02:13.578 CC app/spdk_tgt/spdk_tgt.o 00:02:13.578 CC examples/ioat/perf/perf.o 00:02:13.578 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:13.578 CC app/fio/nvme/fio_plugin.o 00:02:13.578 CC test/app/stub/stub.o 00:02:13.578 CC examples/util/zipf/zipf.o 
00:02:13.578 CC test/app/bdev_svc/bdev_svc.o 00:02:13.578 CC test/dma/test_dma/test_dma.o 00:02:13.578 CC app/fio/bdev/fio_plugin.o 00:02:13.578 LINK spdk_lspci 00:02:13.578 CC test/env/mem_callbacks/mem_callbacks.o 00:02:13.578 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:13.841 LINK rpc_client_test 00:02:13.841 LINK spdk_nvme_discover 00:02:13.841 LINK interrupt_tgt 00:02:13.841 LINK vtophys 00:02:13.841 LINK histogram_perf 00:02:13.841 LINK jsoncat 00:02:13.841 LINK poller_perf 00:02:13.841 CXX test/cpp_headers/crc64.o 00:02:13.841 CXX test/cpp_headers/dif.o 00:02:13.841 CXX test/cpp_headers/dma.o 00:02:13.841 CXX test/cpp_headers/endian.o 00:02:13.841 CXX test/cpp_headers/env_dpdk.o 00:02:13.841 CXX test/cpp_headers/env.o 00:02:13.841 LINK zipf 00:02:13.841 CXX test/cpp_headers/event.o 00:02:13.841 CXX test/cpp_headers/fd_group.o 00:02:13.841 LINK env_dpdk_post_init 00:02:13.841 LINK spdk_trace_record 00:02:13.841 CXX test/cpp_headers/fd.o 00:02:13.841 LINK nvmf_tgt 00:02:13.841 CXX test/cpp_headers/file.o 00:02:13.841 CXX test/cpp_headers/ftl.o 00:02:13.841 CXX test/cpp_headers/gpt_spec.o 00:02:13.841 LINK stub 00:02:13.841 CXX test/cpp_headers/hexlify.o 00:02:13.841 LINK iscsi_tgt 00:02:13.841 CXX test/cpp_headers/histogram_data.o 00:02:13.841 LINK bdev_svc 00:02:13.841 LINK verify 00:02:13.841 CXX test/cpp_headers/idxd.o 00:02:13.841 LINK ioat_perf 00:02:13.841 LINK spdk_tgt 00:02:13.841 CXX test/cpp_headers/idxd_spec.o 00:02:13.841 CXX test/cpp_headers/init.o 00:02:14.101 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:14.101 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:14.101 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:14.101 CXX test/cpp_headers/ioat.o 00:02:14.101 CXX test/cpp_headers/ioat_spec.o 00:02:14.101 CXX test/cpp_headers/iscsi_spec.o 00:02:14.101 CXX test/cpp_headers/json.o 00:02:14.101 CXX test/cpp_headers/jsonrpc.o 00:02:14.101 CXX test/cpp_headers/keyring.o 00:02:14.101 LINK spdk_dd 00:02:14.101 CXX test/cpp_headers/keyring_module.o 00:02:14.101 LINK spdk_trace 00:02:14.101 CXX test/cpp_headers/likely.o 00:02:14.101 CXX test/cpp_headers/log.o 00:02:14.101 CXX test/cpp_headers/lvol.o 00:02:14.367 CXX test/cpp_headers/memory.o 00:02:14.367 LINK pci_ut 00:02:14.367 CXX test/cpp_headers/mmio.o 00:02:14.367 CXX test/cpp_headers/nbd.o 00:02:14.367 CXX test/cpp_headers/net.o 00:02:14.367 CXX test/cpp_headers/notify.o 00:02:14.367 CXX test/cpp_headers/nvme.o 00:02:14.367 CXX test/cpp_headers/nvme_intel.o 00:02:14.367 CXX test/cpp_headers/nvme_ocssd.o 00:02:14.367 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:14.367 LINK test_dma 00:02:14.367 CXX test/cpp_headers/nvme_spec.o 00:02:14.367 CXX test/cpp_headers/nvme_zns.o 00:02:14.367 CXX test/cpp_headers/nvmf_cmd.o 00:02:14.367 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:14.367 CXX test/cpp_headers/nvmf.o 00:02:14.367 CXX test/cpp_headers/nvmf_spec.o 00:02:14.367 CXX test/cpp_headers/nvmf_transport.o 00:02:14.367 CC test/event/event_perf/event_perf.o 00:02:14.367 CC test/event/reactor/reactor.o 00:02:14.367 CC test/event/reactor_perf/reactor_perf.o 00:02:14.367 LINK nvme_fuzz 00:02:14.367 CXX test/cpp_headers/opal.o 00:02:14.367 CC examples/sock/hello_world/hello_sock.o 00:02:14.367 CC test/event/app_repeat/app_repeat.o 00:02:14.367 CXX test/cpp_headers/opal_spec.o 00:02:14.629 CXX test/cpp_headers/pci_ids.o 00:02:14.629 CC examples/vmd/lsvmd/lsvmd.o 00:02:14.629 CC examples/thread/thread/thread_ex.o 00:02:14.629 CXX test/cpp_headers/pipe.o 00:02:14.629 LINK spdk_bdev 00:02:14.629 CC examples/idxd/perf/perf.o 00:02:14.629 
CXX test/cpp_headers/queue.o 00:02:14.629 CC test/event/scheduler/scheduler.o 00:02:14.629 CXX test/cpp_headers/reduce.o 00:02:14.629 CXX test/cpp_headers/rpc.o 00:02:14.629 LINK spdk_nvme 00:02:14.629 CXX test/cpp_headers/scheduler.o 00:02:14.629 CXX test/cpp_headers/scsi.o 00:02:14.629 CXX test/cpp_headers/scsi_spec.o 00:02:14.629 CXX test/cpp_headers/stdinc.o 00:02:14.629 CXX test/cpp_headers/sock.o 00:02:14.629 CC examples/vmd/led/led.o 00:02:14.629 CXX test/cpp_headers/string.o 00:02:14.629 CXX test/cpp_headers/thread.o 00:02:14.629 CXX test/cpp_headers/trace.o 00:02:14.629 CXX test/cpp_headers/trace_parser.o 00:02:14.629 CXX test/cpp_headers/tree.o 00:02:14.629 CXX test/cpp_headers/ublk.o 00:02:14.629 LINK reactor_perf 00:02:14.629 LINK reactor 00:02:14.629 LINK event_perf 00:02:14.629 CXX test/cpp_headers/util.o 00:02:14.629 CXX test/cpp_headers/uuid.o 00:02:14.893 CXX test/cpp_headers/version.o 00:02:14.893 CXX test/cpp_headers/vfio_user_pci.o 00:02:14.893 CXX test/cpp_headers/vfio_user_spec.o 00:02:14.893 CXX test/cpp_headers/vhost.o 00:02:14.893 CXX test/cpp_headers/vmd.o 00:02:14.893 LINK app_repeat 00:02:14.893 LINK mem_callbacks 00:02:14.893 CXX test/cpp_headers/xor.o 00:02:14.893 CXX test/cpp_headers/zipf.o 00:02:14.893 LINK lsvmd 00:02:14.893 CC app/vhost/vhost.o 00:02:14.893 LINK vhost_fuzz 00:02:14.893 LINK spdk_nvme_perf 00:02:14.893 LINK spdk_nvme_identify 00:02:14.893 LINK hello_sock 00:02:14.893 LINK led 00:02:14.893 LINK spdk_top 00:02:15.153 LINK thread 00:02:15.153 LINK scheduler 00:02:15.153 CC test/nvme/err_injection/err_injection.o 00:02:15.153 CC test/nvme/sgl/sgl.o 00:02:15.153 CC test/nvme/overhead/overhead.o 00:02:15.153 CC test/nvme/aer/aer.o 00:02:15.153 CC test/nvme/reset/reset.o 00:02:15.153 CC test/accel/dif/dif.o 00:02:15.153 CC test/nvme/e2edp/nvme_dp.o 00:02:15.153 CC test/nvme/startup/startup.o 00:02:15.153 CC test/nvme/reserve/reserve.o 00:02:15.153 CC test/blobfs/mkfs/mkfs.o 00:02:15.153 CC test/nvme/simple_copy/simple_copy.o 00:02:15.153 CC test/nvme/connect_stress/connect_stress.o 00:02:15.153 CC test/nvme/boot_partition/boot_partition.o 00:02:15.153 CC test/nvme/fused_ordering/fused_ordering.o 00:02:15.153 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:15.153 CC test/nvme/compliance/nvme_compliance.o 00:02:15.153 CC test/nvme/fdp/fdp.o 00:02:15.153 CC test/nvme/cuse/cuse.o 00:02:15.153 CC test/lvol/esnap/esnap.o 00:02:15.153 LINK idxd_perf 00:02:15.153 LINK vhost 00:02:15.412 LINK err_injection 00:02:15.412 LINK startup 00:02:15.412 LINK boot_partition 00:02:15.412 LINK fused_ordering 00:02:15.412 LINK simple_copy 00:02:15.412 LINK reset 00:02:15.412 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:15.412 CC examples/nvme/hello_world/hello_world.o 00:02:15.412 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:15.412 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:15.412 CC examples/nvme/hotplug/hotplug.o 00:02:15.412 CC examples/nvme/reconnect/reconnect.o 00:02:15.412 CC examples/nvme/abort/abort.o 00:02:15.412 CC examples/nvme/arbitration/arbitration.o 00:02:15.412 LINK connect_stress 00:02:15.412 LINK mkfs 00:02:15.412 LINK nvme_dp 00:02:15.412 LINK aer 00:02:15.412 CC examples/accel/perf/accel_perf.o 00:02:15.412 LINK doorbell_aers 00:02:15.412 LINK reserve 00:02:15.412 CC examples/blob/cli/blobcli.o 00:02:15.412 LINK fdp 00:02:15.670 LINK sgl 00:02:15.670 CC examples/blob/hello_world/hello_blob.o 00:02:15.670 LINK overhead 00:02:15.670 LINK nvme_compliance 00:02:15.670 LINK memory_ut 00:02:15.670 LINK pmr_persistence 00:02:15.670 LINK 
cmb_copy 00:02:15.670 LINK dif 00:02:15.928 LINK arbitration 00:02:15.928 LINK hello_world 00:02:15.928 LINK hotplug 00:02:15.928 LINK reconnect 00:02:15.928 LINK abort 00:02:15.928 LINK hello_blob 00:02:15.928 LINK accel_perf 00:02:15.928 LINK nvme_manage 00:02:16.187 LINK blobcli 00:02:16.187 CC test/bdev/bdevio/bdevio.o 00:02:16.446 LINK iscsi_fuzz 00:02:16.446 CC examples/bdev/hello_world/hello_bdev.o 00:02:16.446 CC examples/bdev/bdevperf/bdevperf.o 00:02:16.446 LINK bdevio 00:02:16.704 LINK hello_bdev 00:02:16.704 LINK cuse 00:02:17.271 LINK bdevperf 00:02:17.529 CC examples/nvmf/nvmf/nvmf.o 00:02:17.788 LINK nvmf 00:02:21.068 LINK esnap 00:02:21.068 00:02:21.068 real 0m49.709s 00:02:21.068 user 10m6.870s 00:02:21.068 sys 2m27.487s 00:02:21.068 00:38:55 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:21.068 00:38:55 make -- common/autotest_common.sh@10 -- $ set +x 00:02:21.068 ************************************ 00:02:21.068 END TEST make 00:02:21.068 ************************************ 00:02:21.068 00:38:55 -- common/autotest_common.sh@1142 -- $ return 0 00:02:21.068 00:38:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:21.068 00:38:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:21.068 00:38:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:21.068 00:38:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.068 00:38:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:21.068 00:38:55 -- pm/common@44 -- $ pid=2431417 00:02:21.068 00:38:55 -- pm/common@50 -- $ kill -TERM 2431417 00:02:21.068 00:38:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.068 00:38:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:21.068 00:38:55 -- pm/common@44 -- $ pid=2431419 00:02:21.068 00:38:55 -- pm/common@50 -- $ kill -TERM 2431419 00:02:21.068 00:38:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.068 00:38:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:21.068 00:38:55 -- pm/common@44 -- $ pid=2431421 00:02:21.068 00:38:55 -- pm/common@50 -- $ kill -TERM 2431421 00:02:21.068 00:38:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.068 00:38:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:21.068 00:38:55 -- pm/common@44 -- $ pid=2431450 00:02:21.068 00:38:55 -- pm/common@50 -- $ sudo -E kill -TERM 2431450 00:02:21.325 00:38:55 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:21.326 00:38:55 -- nvmf/common.sh@7 -- # uname -s 00:02:21.326 00:38:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:21.326 00:38:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:21.326 00:38:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:21.326 00:38:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:21.326 00:38:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:21.326 00:38:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:21.326 00:38:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:21.326 00:38:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:21.326 00:38:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:21.326 00:38:55 -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:02:21.326 00:38:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:21.326 00:38:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:21.326 00:38:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:21.326 00:38:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:21.326 00:38:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:21.326 00:38:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:21.326 00:38:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:21.326 00:38:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:21.326 00:38:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:21.326 00:38:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:21.326 00:38:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.326 00:38:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.326 00:38:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.326 00:38:55 -- paths/export.sh@5 -- # export PATH 00:02:21.326 00:38:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.326 00:38:55 -- nvmf/common.sh@47 -- # : 0 00:02:21.326 00:38:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:21.326 00:38:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:21.326 00:38:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:21.326 00:38:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:21.326 00:38:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:21.326 00:38:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:21.326 00:38:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:21.326 00:38:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:21.326 00:38:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:21.326 00:38:55 -- spdk/autotest.sh@32 -- # uname -s 00:02:21.326 00:38:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:21.326 00:38:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:21.326 00:38:55 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:21.326 00:38:55 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:21.326 00:38:55 -- spdk/autotest.sh@40 
-- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:21.326 00:38:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:21.326 00:38:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:21.326 00:38:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:21.326 00:38:55 -- spdk/autotest.sh@48 -- # udevadm_pid=2486917 00:02:21.326 00:38:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:21.326 00:38:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:21.326 00:38:55 -- pm/common@17 -- # local monitor 00:02:21.326 00:38:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.326 00:38:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.326 00:38:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.326 00:38:55 -- pm/common@21 -- # date +%s 00:02:21.326 00:38:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.326 00:38:55 -- pm/common@21 -- # date +%s 00:02:21.326 00:38:55 -- pm/common@25 -- # sleep 1 00:02:21.326 00:38:55 -- pm/common@21 -- # date +%s 00:02:21.326 00:38:55 -- pm/common@21 -- # date +%s 00:02:21.326 00:38:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721083135 00:02:21.326 00:38:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721083135 00:02:21.326 00:38:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721083135 00:02:21.326 00:38:55 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721083135 00:02:21.326 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721083135_collect-vmstat.pm.log 00:02:21.326 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721083135_collect-cpu-load.pm.log 00:02:21.326 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721083135_collect-cpu-temp.pm.log 00:02:21.326 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721083135_collect-bmc-pm.bmc.pm.log 00:02:22.319 00:38:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:22.319 00:38:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:22.319 00:38:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:22.319 00:38:56 -- common/autotest_common.sh@10 -- # set +x 00:02:22.319 00:38:56 -- spdk/autotest.sh@59 -- # create_test_list 00:02:22.319 00:38:56 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:22.319 00:38:56 -- common/autotest_common.sh@10 -- # set +x 00:02:22.319 00:38:56 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:22.319 00:38:56 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:22.319 00:38:56 -- spdk/autotest.sh@61 -- # 
src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:22.319 00:38:56 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:22.319 00:38:56 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:22.319 00:38:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:22.319 00:38:56 -- common/autotest_common.sh@1455 -- # uname 00:02:22.319 00:38:56 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:22.319 00:38:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:22.319 00:38:56 -- common/autotest_common.sh@1475 -- # uname 00:02:22.319 00:38:56 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:22.319 00:38:56 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:22.319 00:38:56 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:22.319 00:38:56 -- spdk/autotest.sh@72 -- # hash lcov 00:02:22.319 00:38:56 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:22.319 00:38:56 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:22.319 --rc lcov_branch_coverage=1 00:02:22.319 --rc lcov_function_coverage=1 00:02:22.319 --rc genhtml_branch_coverage=1 00:02:22.319 --rc genhtml_function_coverage=1 00:02:22.319 --rc genhtml_legend=1 00:02:22.319 --rc geninfo_all_blocks=1 00:02:22.319 ' 00:02:22.319 00:38:56 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:22.319 --rc lcov_branch_coverage=1 00:02:22.319 --rc lcov_function_coverage=1 00:02:22.319 --rc genhtml_branch_coverage=1 00:02:22.319 --rc genhtml_function_coverage=1 00:02:22.319 --rc genhtml_legend=1 00:02:22.319 --rc geninfo_all_blocks=1 00:02:22.319 ' 00:02:22.319 00:38:56 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:22.319 --rc lcov_branch_coverage=1 00:02:22.319 --rc lcov_function_coverage=1 00:02:22.319 --rc genhtml_branch_coverage=1 00:02:22.319 --rc genhtml_function_coverage=1 00:02:22.319 --rc genhtml_legend=1 00:02:22.319 --rc geninfo_all_blocks=1 00:02:22.319 --no-external' 00:02:22.319 00:38:56 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:22.319 --rc lcov_branch_coverage=1 00:02:22.319 --rc lcov_function_coverage=1 00:02:22.319 --rc genhtml_branch_coverage=1 00:02:22.319 --rc genhtml_function_coverage=1 00:02:22.319 --rc genhtml_legend=1 00:02:22.319 --rc geninfo_all_blocks=1 00:02:22.319 --no-external' 00:02:22.319 00:38:56 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:22.319 lcov: LCOV version 1.14 00:02:22.319 00:38:56 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:24.221 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 
00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:24.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:24.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:24.222 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:24.222 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:24.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:24.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:39.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:39.100 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:57.184 00:39:30 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:57.184 00:39:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:57.184 00:39:30 -- common/autotest_common.sh@10 -- # set +x 00:02:57.184 00:39:30 -- spdk/autotest.sh@91 -- # rm -f 00:02:57.184 00:39:30 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:57.184 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:57.184 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:57.184 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:57.184 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:57.184 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:57.184 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:57.184 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:57.184 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:57.184 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:57.184 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:57.184 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:57.184 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:57.184 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:57.184 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:57.184 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:57.184 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:57.184 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:57.184 00:39:31 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:57.184 00:39:31 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:57.184 00:39:31 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:57.184 00:39:31 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:57.184 00:39:31 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:57.184 00:39:31 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:57.184 00:39:31 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:57.184 
00:39:31 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:57.184 00:39:31 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:57.184 00:39:31 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:57.184 00:39:31 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:57.184 00:39:31 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:57.184 00:39:31 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:57.184 00:39:31 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:57.184 00:39:31 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:57.184 No valid GPT data, bailing 00:02:57.184 00:39:31 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:57.184 00:39:31 -- scripts/common.sh@391 -- # pt= 00:02:57.184 00:39:31 -- scripts/common.sh@392 -- # return 1 00:02:57.184 00:39:31 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:57.184 1+0 records in 00:02:57.184 1+0 records out 00:02:57.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431293 s, 243 MB/s 00:02:57.184 00:39:31 -- spdk/autotest.sh@118 -- # sync 00:02:57.184 00:39:31 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:57.184 00:39:31 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:57.184 00:39:31 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:58.560 00:39:33 -- spdk/autotest.sh@124 -- # uname -s 00:02:58.561 00:39:33 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:58.561 00:39:33 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:58.561 00:39:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:58.561 00:39:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:58.561 00:39:33 -- common/autotest_common.sh@10 -- # set +x 00:02:58.561 ************************************ 00:02:58.561 START TEST setup.sh 00:02:58.561 ************************************ 00:02:58.561 00:39:33 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:58.818 * Looking for test storage... 00:02:58.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:58.818 00:39:33 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:58.818 00:39:33 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:58.818 00:39:33 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:58.818 00:39:33 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:58.818 00:39:33 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:58.818 00:39:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:58.818 ************************************ 00:02:58.818 START TEST acl 00:02:58.818 ************************************ 00:02:58.818 00:39:33 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:58.818 * Looking for test storage... 
00:02:58.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:58.818 00:39:33 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:58.818 00:39:33 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:58.818 00:39:33 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:58.818 00:39:33 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:58.818 00:39:33 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:58.818 00:39:33 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:58.818 00:39:33 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:58.818 00:39:33 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:58.818 00:39:33 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:58.818 00:39:33 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:58.818 00:39:33 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:58.818 00:39:33 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:58.818 00:39:33 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:58.818 00:39:33 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:58.818 00:39:33 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:58.818 00:39:33 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:00.191 00:39:34 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:00.191 00:39:34 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:00.191 00:39:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.191 00:39:34 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:00.191 00:39:34 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:00.191 00:39:34 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:01.564 Hugepages 00:03:01.564 node hugesize free / total 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:03:01.564 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:35 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.564 00:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:01.564 00:39:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:01.564 00:39:36 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:01.564 00:39:36 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:01.564 00:39:36 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:01.564 00:39:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.564 00:39:36 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:01.565 00:39:36 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:01.565 00:39:36 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:01.565 00:39:36 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:01.565 00:39:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:01.565 ************************************ 00:03:01.565 START TEST denied 00:03:01.565 ************************************ 00:03:01.565 00:39:36 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:01.565 00:39:36 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:01.565 00:39:36 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:01.565 00:39:36 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:01.565 00:39:36 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.565 00:39:36 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:02.960 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:02.960 00:39:37 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:02.960 00:39:37 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:02.960 00:39:37 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:02.960 00:39:37 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:02.960 00:39:37 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:02.960 00:39:37 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:02.960 00:39:37 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:02.960 00:39:37 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:02.960 00:39:37 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:02.960 00:39:37 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.488 00:03:05.488 real 0m3.757s 00:03:05.488 user 0m1.062s 00:03:05.488 sys 0m1.782s 00:03:05.488 00:39:39 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:05.488 00:39:39 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:05.488 ************************************ 00:03:05.488 END TEST denied 00:03:05.488 ************************************ 00:03:05.488 00:39:39 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:05.488 00:39:39 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:05.488 00:39:39 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:05.488 00:39:39 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:05.488 00:39:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:05.488 ************************************ 00:03:05.488 START TEST allowed 00:03:05.488 ************************************ 00:03:05.488 00:39:39 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:05.489 00:39:39 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:05.489 00:39:39 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:05.489 00:39:39 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:05.489 00:39:39 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.489 00:39:39 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:07.391 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:07.391 00:39:42 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:07.391 00:39:42 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:07.391 00:39:42 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:07.391 00:39:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:07.391 00:39:42 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.294 00:03:09.294 real 0m3.829s 00:03:09.294 user 0m0.973s 00:03:09.294 sys 0m1.683s 00:03:09.294 00:39:43 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:09.294 00:39:43 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:09.294 ************************************ 00:03:09.294 END TEST allowed 00:03:09.294 ************************************ 00:03:09.294 00:39:43 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:09.294 00:03:09.294 real 0m10.328s 00:03:09.294 user 0m3.082s 00:03:09.294 sys 0m5.222s 00:03:09.294 00:39:43 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:09.294 00:39:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:09.294 ************************************ 00:03:09.294 END TEST acl 00:03:09.294 ************************************ 00:03:09.294 00:39:43 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:09.294 00:39:43 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:09.294 00:39:43 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.294 00:39:43 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.294 00:39:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:09.294 ************************************ 00:03:09.294 START TEST hugepages 00:03:09.294 ************************************ 00:03:09.294 00:39:43 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:09.294 * Looking for test storage... 00:03:09.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:09.294 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:09.294 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:09.294 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:09.294 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:09.294 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:09.294 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:09.294 00:39:43 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:09.294 00:39:43 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:09.294 00:39:43 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43713852 kB' 'MemAvailable: 47219516 kB' 'Buffers: 2704 kB' 'Cached: 10271816 kB' 'SwapCached: 0 kB' 'Active: 7267900 kB' 'Inactive: 3508668 kB' 'Active(anon): 6872412 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505316 kB' 'Mapped: 220028 kB' 'Shmem: 6370364 kB' 'KReclaimable: 190692 kB' 'Slab: 572696 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 382004 kB' 'KernelStack: 12976 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562296 kB' 'Committed_AS: 7992040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.295 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.296 00:39:43 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.296 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.297 00:39:43 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:09.297 
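[editor's note] The long run of "[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]" / "continue" entries above is bash xtrace of the get_meminfo helper in setup/common.sh scanning /proc/meminfo one field at a time until it reaches the requested key, here Hugepagesize, which comes back as 2048 ("echo 2048" / "return 0"). The backslash-escaped key on the right-hand side is simply how xtrace renders the literal comparison string. A minimal sketch of that scanning pattern, written independently of the real helper (function name and behaviour here are illustrative, not the script's exact code):

    #!/usr/bin/env bash
    # Sketch: scan /proc/meminfo for one key, the way the xtrace above does it.
    # Node-specific meminfo handling is omitted; this is illustrative only.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Skip every field that is not the one we asked for.
            [[ $var == "$get" ]] || continue
            echo "$val"          # e.g. "2048" for Hugepagesize
            return 0
        done < /proc/meminfo
        return 1                 # key not present
    }

    default_hugepages=$(get_meminfo_sketch Hugepagesize)   # -> 2048 (kB) on this runner

The same pattern is reused later in the trace for AnonHugePages, HugePages_Surp and HugePages_Rsvd, which is why the loop output repeats several times below.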
00:39:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:09.297 00:39:43 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:09.297 00:39:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.297 00:39:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.297 00:39:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:09.297 ************************************ 00:03:09.297 START TEST default_setup 00:03:09.297 ************************************ 00:03:09.297 00:39:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:09.297 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:09.297 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:09.297 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:09.297 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:09.297 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:09.298 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:09.298 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:09.298 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:09.298 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:09.298 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:09.298 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:09.298 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:09.298 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:09.298 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:09.298 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:09.298 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:09.298 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:09.298 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:09.298 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:09.298 00:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:09.298 00:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.298 00:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:10.672 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:10.672 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:10.672 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:10.673 
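[editor's note] Just before "START TEST default_setup", the trace shows two preparatory steps: clear_hp writes 0 into every per-node hugepage pool (the repeated "echo 0" lines) and exports CLEAR_HUGE=yes, then get_test_nr_hugepages converts the requested size 2097152 (same kB units as Hugepagesize) into a page count, 2097152 / 2048 = 1024 pages, assigned to node 0. A hedged sketch of that arithmetic and of the sysfs writes implied by the trace (paths and variable names are assumptions based on the output, not the script's verbatim code; writing the sysfs files needs root):

    #!/usr/bin/env bash
    size_kb=2097152                       # requested pool size from the trace
    hugepagesize_kb=2048                  # Hugepagesize reported by /proc/meminfo
    nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 2097152 / 2048 = 1024 pages

    # clear_hp equivalent: reset every per-node pool before the test allocates its own.
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"        # one "echo 0" per node / page-size pair, as in the trace
    done
    export CLEAR_HUGE=yes     # later stages use this to know the pools were reset

    echo "would request $nr_hugepages pages of ${hugepagesize_kb} kB on node 0"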
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:10.673 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:10.673 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:10.673 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:10.673 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:10.673 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:10.673 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:10.673 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:10.673 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:10.673 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:10.673 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:10.673 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:10.673 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:11.617 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45813520 kB' 'MemAvailable: 49319184 kB' 'Buffers: 2704 kB' 'Cached: 10271920 kB' 'SwapCached: 0 kB' 'Active: 7286344 kB' 'Inactive: 3508668 kB' 'Active(anon): 6890856 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523664 kB' 'Mapped: 220096 kB' 'Shmem: 6370468 kB' 'KReclaimable: 190692 kB' 'Slab: 571996 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381304 kB' 
'KernelStack: 12752 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8013692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.617 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 
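[editor's note] The "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines a little further up are printed while scripts/setup.sh rebinds the I/OAT DMA channels and the test NVMe SSD (0000:88:00.0) to vfio-pci so the test can drive them from user space. The internals of setup.sh are not shown in this excerpt; the sketch below uses the generic driver_override mechanism as one common way to perform such a rebind and is an illustration, not SPDK's actual implementation:

    #!/usr/bin/env bash
    # Illustrative only: move one PCI function from its kernel driver to vfio-pci,
    # matching the "ioatdma -> vfio-pci" lines above. The BDF is taken from the
    # trace; the method (driver_override + drivers_probe) is an assumption.
    bdf=0000:00:04.7

    modprobe vfio-pci
    if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
        echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
    fi
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe

After the rebind, the meminfo snapshot printed here already reports HugePages_Total: 1024 and Hugetlb: 2097152 kB, i.e. the pool requested by default_setup is in place before verify_nr_hugepages starts checking it.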
00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.618 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.619 00:39:46 
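[editor's note] The scan that finishes here is the same get_meminfo pattern applied to AnonHugePages; it returns 0 ("echo 0" / "return 0"), so verify_nr_hugepages records anon=0, meaning transparent hugepages are not inflating the counters being checked. The next scan, starting immediately after, reads HugePages_Surp the same way. Outside the script, the same value can be pulled with a one-liner (shown only as a quick cross-check, not what the script does):

    awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo   # prints 0 on this runner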
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45820132 kB' 'MemAvailable: 49325796 kB' 'Buffers: 2704 kB' 'Cached: 10271924 kB' 'SwapCached: 0 kB' 'Active: 7285896 kB' 'Inactive: 3508668 kB' 'Active(anon): 6890408 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523148 kB' 'Mapped: 220072 kB' 'Shmem: 6370472 kB' 'KReclaimable: 190692 kB' 'Slab: 571980 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381288 kB' 'KernelStack: 12816 kB' 'PageTables: 8148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8013712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.619 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [loop repeats over the remaining /proc/meminfo keys, SecPageTables through HugePages_Rsvd; none match HugePages_Surp]
00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
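The trace above is setup/common.sh's get_meminfo walking a meminfo file with IFS=': ' and read -r var val _, skipping every key until the requested one (first HugePages_Surp, now HugePages_Rsvd) and echoing its value. A minimal, self-contained sketch of that lookup pattern follows; the function name lookup_meminfo and its argument handling are illustrative, not the test's own helper.

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup pattern shown in the trace: scan a meminfo-style
    # file line by line and print the value of a single requested key.
    lookup_meminfo() {
        local get=$1                        # key to fetch, e.g. HugePages_Surp
        local mem_f=${2:-/proc/meminfo}     # defaults to the system-wide file
        local var val rest
        while IFS=': ' read -r var val rest; do
            [[ $var == "$get" ]] || continue    # skip non-matching keys, as in the trace
            echo "$val"                         # value only; the kB unit column is dropped
            return 0
        done < "$mem_f"
        return 1                                # key not present
    }

    # Example: the trace's HugePages_Surp lookup prints 0 on this host.
    lookup_meminfo HugePages_Surp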
00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:11.620 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:11.621 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45819828 kB' 'MemAvailable: 49325492 kB' 'Buffers: 2704 kB' 'Cached: 10271940 kB' 'SwapCached: 0 kB' 'Active: 7286036 kB' 'Inactive: 3508668 kB' 'Active(anon): 6890548 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523336 kB' 'Mapped: 220072 kB' 'Shmem: 6370488 kB' 'KReclaimable: 190692 kB' 'Slab: 572052 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381360 kB' 'KernelStack: 12800 kB' 'PageTables: 8312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8013732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB'
00:03:11.621 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [loop repeats over /proc/meminfo keys MemTotal through Unaccepted; none match HugePages_Rsvd]
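The dump above reports HugePages_Total: 1024 with Hugepagesize: 2048 kB, and the Hugetlb line agrees with their product, since there are no surplus pages and, evidently, no hugetlb pages of another size on this host. A quick way to check that relationship on a live system with a kernel that exposes the Hugetlb field; the awk extraction here is illustrative:

    # 1024 pages * 2048 kB/page = 2097152 kB, matching the Hugetlb figure above.
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    hugetlb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)
    echo "HugePages_Total=$total Hugepagesize=${size_kb}kB product=$((total * size_kb))kB Hugetlb=${hugetlb}kB"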
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [HugePages_Total and HugePages_Free are read and skipped]
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:11.622 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:11.623 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45820368 kB' 'MemAvailable: 49326032 kB' 'Buffers: 2704 kB' 'Cached: 10271964 kB' 'SwapCached: 0 kB' 'Active: 7285736 kB' 'Inactive: 3508668 kB' 'Active(anon): 6890248 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523000 kB' 'Mapped: 220072 kB' 'Shmem: 6370512 kB' 'KReclaimable: 190692 kB' 'Slab: 572052 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381360 kB' 'KernelStack: 12784 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8013756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB'
00:03:11.623 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [loop repeats over /proc/meminfo keys MemTotal through Unaccepted; none match HugePages_Total]
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
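At this point hugepages.sh has collected surp (HugePages_Surp), resv (HugePages_Rsvd) and the configured nr_hugepages, and the checks at setup/hugepages.sh@107/@109 assert that the kernel's HugePages_Total adds up to nr_hugepages + surp + resv (1024 == 1024 + 0 + 0 here). A compact standalone version of that consistency check; the variable names mirror the trace, but the script itself is an illustrative sketch rather than the test's code:

    # Re-derive the hugepage accounting identity asserted at setup/hugepages.sh@107-110.
    nr_hugepages=1024   # value the test configured (echoed as nr_hugepages=1024 above)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
    else
        echo "unexpected hugepage split: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
        exit 1
    fi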
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21070952 kB' 'MemUsed: 11805988 kB' 'SwapCached: 0 kB' 'Active: 5485304 kB' 'Inactive: 3265856 kB' 'Active(anon): 5295836 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8442776 kB' 'Mapped: 103792 kB' 'AnonPages: 311544 kB' 'Shmem: 4987452 kB' 'KernelStack: 7720 kB' 'PageTables: 5100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121276 kB' 'Slab: 319156 kB' 'SReclaimable: 121276 kB' 'SUnreclaim: 197880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
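get_nodes has just enumerated /sys/devices/system/node/node* (two nodes on this host, with all 1024 pages assigned to node 0), and get_meminfo is now re-run with a node argument, so mem_f switches from /proc/meminfo to the node-local meminfo file whose lines carry a "Node N" prefix that the mem=("${mem[@]#Node +([0-9]) }") step strips. A standalone sketch of that per-node pass; the reporting loop body is illustrative:

    # Walk the NUMA nodes the way get_nodes does and report per-node hugepage counts.
    shopt -s extglob nullglob
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}          # node index, e.g. 0 or 1
        mem_f=$node_dir/meminfo          # node-local meminfo used instead of /proc/meminfo
        # Per-node lines look like "Node 0 HugePages_Free: 1024"; take the last field.
        free=$(awk '/HugePages_Free:/ {print $NF}' "$mem_f")
        surp=$(awk '/HugePages_Surp:/ {print $NF}' "$mem_f")
        echo "node $node: HugePages_Free=$free HugePages_Surp=$surp"
    done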
00:03:11.624 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [loop repeats over the node0 meminfo keys, MemTotal through Unaccepted; none match HugePages_Surp]
00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:11.625 node0=1024 expecting 1024 00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:11.625 00:03:11.625 real 0m2.448s 00:03:11.625 user 0m0.659s 00:03:11.625 sys 0m0.950s 00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:11.625 00:39:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:11.625 ************************************ 00:03:11.625 END TEST default_setup 00:03:11.625 ************************************ 00:03:11.625 00:39:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:11.626 00:39:46 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:11.626 00:39:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:11.626 00:39:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:11.626 00:39:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:11.626 ************************************ 00:03:11.626 START TEST per_node_1G_alloc 00:03:11.626 ************************************ 00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:11.626 00:39:46 
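The get_meminfo helper traced above reads /proc/meminfo (or a node-local meminfo file when a node id is passed), splits each line on ': ', and prints the value of the requested field, so every call such as get_meminfo HugePages_Surp costs one full scan. A minimal bash sketch of that scan, assuming the standard meminfo layout; the name meminfo_value is illustrative and is not part of setup/common.sh:

  meminfo_value() {
    local get=$1 node=$2 mem_f=/proc/meminfo
    # Per-node queries use the node-local file, whose lines carry a "Node <n> " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
      # Values are in kB for sizes and bare counts for the HugePages_* fields.
      if [[ $var == "$get" ]]; then
        echo "${val:-0}"
        return 0
      fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    echo 0
  }

  meminfo_value HugePages_Total     # 1024 in the snapshots below
  meminfo_value HugePages_Free 0    # free 2048 kB pages reported for node 0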
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:11.626 00:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:13.007 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:13.008 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:13.008 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:13.008 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:13.008 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:13.008 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:13.008 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:13.008 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:13.008 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:13.008 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:13.008 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:13.008 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:13.008 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:13.008 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:13.008 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:13.008 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:13.008 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:13.008 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45821892 kB' 'MemAvailable: 49327556 kB' 'Buffers: 2704 kB' 'Cached: 10272028 kB' 'SwapCached: 0 kB' 'Active: 7285720 kB' 'Inactive: 3508668 kB' 'Active(anon): 6890232 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522864 kB' 'Mapped: 220192 kB' 'Shmem: 6370576 kB' 'KReclaimable: 190692 kB' 'Slab: 572100 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381408 kB' 'KernelStack: 12752 kB' 'PageTables: 8160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8013800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB'
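The get_test_nr_hugepages_per_node trace above turns the requested 1048576 kB (1 GB) into 512 hugepages of the default 2048 kB size and assigns that count to each node named in HUGENODE. A simplified bash restatement of that arithmetic, shown for illustration only and not taken from setup/hugepages.sh:

  size_kb=1048576            # requested allocation per node, in kB (1 GB)
  hugepage_kb=2048           # Hugepagesize reported in /proc/meminfo
  pages_per_node=$(( size_kb / hugepage_kb ))   # 512
  nodes_test=()
  for node in 0 1; do        # nodes passed as HUGENODE=0,1
    nodes_test[node]=$pages_per_node
  done
  echo "NRHUGE=$pages_per_node HUGENODE=0,1"    # matches the values handed to scripts/setup.sh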
[get_meminfo scan: each field of the snapshot above, MemTotal through HardwareCorrupted, is tested with [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] and skipped with 'continue' (setup/common.sh@32)]
00:03:13.009 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:13.009 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:13.009 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:13.009 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
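The snapshot above reports HugePages_Total: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, which is self-consistent: 1024 pages x 2048 kB = 2097152 kB (2 GiB). A quick, illustrative consistency check (not part of the test itself), valid when only 2048 kB pages are configured:

  awk '/^HugePages_Total:/ {t=$2} /^Hugepagesize:/ {sz=$2} END {print t * sz, "kB"}' /proc/meminfo
  # prints 2097152 kB for the values captured above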
00:03:13.009 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:13.009 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:13.009 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:13.009 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:13.009 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:13.009 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.009 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.009 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.009 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.009 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.009 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.009 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:13.009 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45821980 kB' 'MemAvailable: 49327644 kB' 'Buffers: 2704 kB' 'Cached: 10272032 kB' 'SwapCached: 0 kB' 'Active: 7285924 kB' 'Inactive: 3508668 kB' 'Active(anon): 6890436 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523088 kB' 'Mapped: 220084 kB' 'Shmem: 6370580 kB' 'KReclaimable: 190692 kB' 'Slab: 572080 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381388 kB' 'KernelStack: 12784 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8013820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB'
[get_meminfo scan: each field of the snapshot above, MemTotal through HugePages_Rsvd, is tested with [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and skipped with 'continue' (setup/common.sh@32)]
00:03:13.011 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:13.011 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:13.011 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
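verify_nr_hugepages gathers AnonHugePages, HugePages_Surp and, next, HugePages_Rsvd with one get_meminfo scan per field. An alternative, shown purely for illustration and not how setup/common.sh does it, is to pick up all HugePages_* counters in a single pass:

  declare -A hp
  while IFS=': ' read -r key val _; do
    # Keep just the hugepage counters; their values are plain page counts.
    [[ $key == HugePages_* ]] && hp[$key]=$val
  done </proc/meminfo
  echo "total=${hp[HugePages_Total]} free=${hp[HugePages_Free]} rsvd=${hp[HugePages_Rsvd]} surp=${hp[HugePages_Surp]}"
  # with the snapshots above: total=1024 free=1024 rsvd=0 surp=0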
00:03:13.011 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:13.011 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:13.011 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-31 -- # local get=HugePages_Rsvd node= ; mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
00:03:13.011 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45822032 kB' 'MemAvailable: 49327696 kB' 'Buffers: 2704 kB' 'Cached: 10272048 kB' 'SwapCached: 0 kB' 'Active: 7285720 kB' 'Inactive: 3508668 kB' 'Active(anon): 6890232 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522828 kB' 'Mapped: 220084 kB' 'Shmem: 6370596 kB' 'KReclaimable: 190692 kB' 'Slab: 572080 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381388 kB' 'KernelStack: 12768 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8013844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB'
00:03:13.012 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue (field scan for HugePages_Rsvd: skipped every field from MemTotal through HugePages_Free)
00:03:13.013 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == HugePages_Rsvd ]]
00:03:13.013 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:13.013 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:13.013 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:13.013 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:13.013 nr_hugepages=1024
00:03:13.013 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:13.013 resv_hugepages=0
00:03:13.013 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:13.013 surplus_hugepages=0
00:03:13.013 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:13.013 anon_hugepages=0
00:03:13.013 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:13.013 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:13.013 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:13.013 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-31 -- # local get=HugePages_Total node= ; mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
00:03:13.013 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45822032 kB' 'MemAvailable: 49327696 kB' 'Buffers: 2704 kB' 'Cached: 10272072 kB' 'SwapCached: 0 kB' 'Active: 7285976 kB' 'Inactive: 3508668 kB' 'Active(anon): 6890488 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523096 kB' 'Mapped: 220084 kB' 'Shmem: 6370620 kB' 'KReclaimable: 190692 kB' 'Slab: 572080 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381388 kB' 'KernelStack: 12784 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8013864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB'
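With surp=0, resv=0 and nr_hugepages=1024 in hand, hugepages.sh checks that the kernel's HugePages_Total matches the requested count plus surplus and reserved pages before moving on to the per-node checks. The same arithmetic as a stand-alone check, using the values reported in this run (variable names here are illustrative):

  #!/usr/bin/env bash
  # Re-check of the accounting traced at setup/hugepages.sh@107-110.
  nr_hugepages=1024   # pages requested for the test
  surp=0              # HugePages_Surp from /proc/meminfo
  resv=0              # HugePages_Rsvd from /proc/meminfo
  total=1024          # HugePages_Total from /proc/meminfo

  if (( total == nr_hugepages + surp + resv )); then
          echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
  else
          echo "hugepage accounting mismatch" >&2
          exit 1
  fi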
00:03:13.014 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue (field scan for HugePages_Total: skipped every field from MemTotal through Unaccepted)
00:03:13.276 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == HugePages_Total ]]
00:03:13.276 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:13.276 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:13.276 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:13.276 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:13.276 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:13.276 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:13.276 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
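get_nodes, traced just above and continued below, walks the NUMA node directories under /sys/devices/system/node and records that this run expects 512 pages on each of the box's two nodes. A stand-alone sketch of that enumeration; the array name and the simplified glob (the trace uses the extglob pattern node+([0-9])) are illustrative:

  #!/usr/bin/env bash
  shopt -s nullglob
  declare -A nodes_expected
  per_node=512   # this run asked for 512 x 2048 kB pages on each of the 2 nodes

  # One entry per NUMA node directory, keyed by the node number.
  for node_dir in /sys/devices/system/node/node[0-9]*; do
          node=${node_dir##*node}
          nodes_expected[$node]=$per_node
  done

  echo "found ${#nodes_expected[@]} node(s): ${!nodes_expected[*]}"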
00:03:13.276 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:13.276 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:13.276 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:13.276 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:13.276 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:13.276 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:13.276 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:13.276 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-31 -- # local get=HugePages_Surp node=0; mem_f=/sys/devices/system/node/node0/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
00:03:13.277 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22117496 kB' 'MemUsed: 10759444 kB' 'SwapCached: 0 kB' 'Active: 5485148 kB' 'Inactive: 3265856 kB' 'Active(anon): 5295680 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8442780 kB' 'Mapped: 104232 kB' 'AnonPages: 311328 kB' 'Shmem: 4987456 kB' 'KernelStack: 7736 kB' 'PageTables: 5052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121276 kB' 'Slab: 319184 kB' 'SReclaimable: 121276 kB' 'SUnreclaim: 197908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
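Because node=0 was passed this time, get_meminfo switched to /sys/devices/system/node/node0/meminfo, where every line carries a "Node 0 " prefix and the per-node HugePages_Total/Free/Surp counters live, before running the same field scan. A hedged one-liner for reading such a per-node counter directly; the sed/awk pipeline is a shorthand stand-in for the bash loop shown earlier, not the script's own code:

  #!/usr/bin/env bash
  # Print node 0's surplus hugepage count: drop the "Node 0 " prefix, match the key.
  node=0
  file=/sys/devices/system/node/node$node/meminfo

  surp=$(sed -E 's/^Node [0-9]+ +//' "$file" | awk -F': +' '$1 == "HugePages_Surp" {print $2}')
  echo "node $node HugePages_Surp=$surp"   # 0 in this run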
00:03:13.277 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue (node 0 field scan for HugePages_Surp in progress: skipped MemTotal MemFree MemUsed SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked Dirty Writeback FilePages Mapped AnonPages Shmem KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp KReclaimable Slab SReclaimable SUnreclaim AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped Unaccepted HugePages_Total ...)
00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23700532 kB' 'MemUsed: 3964220 kB' 'SwapCached: 0 kB' 'Active: 1804012 kB' 'Inactive: 242812 kB' 'Active(anon): 1597992 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242812 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1832036 kB' 'Mapped: 116288 kB' 'AnonPages: 214848 kB' 'Shmem: 1383204 kB' 'KernelStack: 5048 kB' 'PageTables: 3200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69416 kB' 'Slab: 252896 kB' 'SReclaimable: 69416 kB' 'SUnreclaim: 183480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.278 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.279 00:39:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:13.279 node0=512 expecting 512 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:13.279 node1=512 expecting 512 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:13.279 00:03:13.279 real 0m1.467s 00:03:13.279 user 0m0.608s 00:03:13.279 sys 0m0.813s 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:13.279 00:39:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:13.279 ************************************ 00:03:13.279 END TEST per_node_1G_alloc 00:03:13.279 ************************************ 00:03:13.279 00:39:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:13.279 00:39:47 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:13.279 00:39:47 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:13.279 00:39:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:13.279 00:39:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:13.279 ************************************ 00:03:13.279 START TEST even_2G_alloc 00:03:13.279 ************************************ 00:03:13.279 00:39:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.280 00:39:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:14.214 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:14.214 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 
00:03:14.214 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:14.214 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:14.214 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:14.214 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:14.214 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:14.214 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:14.214 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:14.214 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:14.214 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:14.214 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:14.214 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:14.214 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:14.214 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:14.214 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:14.214 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45803908 kB' 'MemAvailable: 49309572 kB' 'Buffers: 2704 kB' 'Cached: 10272168 kB' 'SwapCached: 0 kB' 'Active: 7284984 kB' 'Inactive: 3508668 kB' 'Active(anon): 6889496 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522036 kB' 'Mapped: 219864 kB' 'Shmem: 6370716 kB' 'KReclaimable: 190692 kB' 'Slab: 572148 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381456 kB' 'KernelStack: 12768 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7998600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.476 
00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.476 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.477 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45806980 kB' 'MemAvailable: 49312644 kB' 'Buffers: 2704 kB' 'Cached: 10272172 kB' 'SwapCached: 0 kB' 'Active: 7283236 kB' 'Inactive: 3508668 kB' 'Active(anon): 6887748 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520340 kB' 'Mapped: 219224 kB' 'Shmem: 6370720 kB' 'KReclaimable: 190692 kB' 'Slab: 572196 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381504 kB' 'KernelStack: 12688 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7998616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45807592 kB' 'MemAvailable: 49313256 kB' 'Buffers: 2704 kB' 'Cached: 10272188 kB' 'SwapCached: 0 kB' 'Active: 7282540 kB' 'Inactive: 3508668 kB' 'Active(anon): 6887052 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519440 kB' 'Mapped: 219148 kB' 'Shmem: 6370736 kB' 'KReclaimable: 190692 kB' 'Slab: 572192 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381500 kB' 'KernelStack: 12704 kB' 'PageTables: 7792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7998636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:14.481 nr_hugepages=1024 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:14.481 resv_hugepages=0 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:14.481 surplus_hugepages=0 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:14.481 anon_hugepages=0 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45811388 kB' 'MemAvailable: 49317052 kB' 'Buffers: 2704 kB' 'Cached: 10272212 kB' 'SwapCached: 0 kB' 'Active: 7282660 kB' 'Inactive: 3508668 kB' 'Active(anon): 6887172 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519532 kB' 'Mapped: 219032 kB' 'Shmem: 6370760 kB' 'KReclaimable: 190692 kB' 'Slab: 572192 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381500 kB' 'KernelStack: 12656 kB' 'PageTables: 7644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7998292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 
00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.482 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _
[... setup/common.sh@32: each remaining /proc/meminfo field (Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) fails the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match and hits "continue" ...]
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
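The check above, (( 1024 == nr_hugepages + surp + resv )), is easier to read outside of xtrace. Below is a minimal stand-alone sketch of the same accounting, not SPDK's setup/common.sh or setup/hugepages.sh code: the meminfo_field helper name is invented for illustration, and surplus/reserved are assumed to come from the HugePages_Surp and HugePages_Rsvd lines of /proc/meminfo.

  #!/usr/bin/env bash
  # Sketch only: compare the kernel's global hugepage total against the requested
  # count plus surplus and reserved pages, all read from /proc/meminfo.
  meminfo_field() {                          # hypothetical helper, not the traced get_meminfo
      local field=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$field" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }

  nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
  total=$(meminfo_field HugePages_Total)
  surp=$(meminfo_field HugePages_Surp)
  resv=$(meminfo_field HugePages_Rsvd)
  (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent: ${total} pages"

The per-node loop that follows repeats the same kind of lookup against each node's own meminfo file.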
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.483 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22122560 kB' 'MemUsed: 10754380 kB' 'SwapCached: 0 kB' 'Active: 5483588 kB' 'Inactive: 3265856 kB' 'Active(anon): 5294120 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8442864 kB' 'Mapped: 103068 kB' 'AnonPages: 309700 kB' 'Shmem: 4987540 kB' 'KernelStack: 7608 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121276 kB' 'Slab: 319308 kB' 'SReclaimable: 121276 kB' 'SUnreclaim: 198032 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@32: each node0 meminfo field from MemTotal through HugePages_Free fails the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and hits "continue" ...]
00:03:14.743 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.743 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:14.743 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:14.743 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
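Node 1 is queried next in the same way. The per-node files under /sys/devices/system/node/node<N>/meminfo differ from /proc/meminfo only in that every line carries a "Node <N>" prefix, which is what the traced mem=("${mem[@]#Node +([0-9]) }") step strips. A minimal sketch of that per-node lookup, with an invented helper name rather than the real setup/common.sh code:

  #!/usr/bin/env bash
  # Sketch only: read one field from a NUMA node's meminfo, where lines look like
  #   "Node 0 HugePages_Surp:     0"
  node_meminfo_field() {                     # hypothetical helper, not the traced get_meminfo
      local node=$1 field=$2 _node _id var val _
      while read -r _node _id var val _; do
          if [[ ${var%:} == "$field" ]]; then
              echo "$val"
              return 0
          fi
      done < "/sys/devices/system/node/node${node}/meminfo"
      return 1
  }

  node_meminfo_field 0 HugePages_Surp        # the value the trace above resolved to 0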
00:03:14.743 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:14.743 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:14.743 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:14.743 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:14.743 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:14.743 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:14.743 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:14.744 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.744 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:14.744 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:14.744 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.744 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.744 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.744 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23687972 kB' 'MemUsed: 3976780 kB' 'SwapCached: 0 kB' 'Active: 1798824 kB' 'Inactive: 242812 kB' 'Active(anon): 1592804 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242812 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1832068 kB' 'Mapped: 116080 kB' 'AnonPages: 209568 kB' 'Shmem: 1383236 kB' 'KernelStack: 5000 kB' 'PageTables: 3032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69416 kB' 'Slab: 252876 kB' 'SReclaimable: 69416 kB' 'SUnreclaim: 183460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:14.744 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@32: each node1 meminfo field from MemTotal through HugePages_Free fails the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and hits "continue" ...]
00:03:14.745 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.745 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:14.745 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:14.745 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:14.745 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:14.745 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:14.745 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:14.745 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:14.745 node0=512 expecting 512
00:03:14.745 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:14.745 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:14.745 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:14.745 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:14.745 node1=512 expecting 512
00:03:14.745 00:39:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:14.745 
00:03:14.745 real 0m1.395s
00:03:14.745 user 0m0.583s
00:03:14.745 sys 0m0.762s
00:03:14.745 00:39:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:14.745 00:39:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:14.745 ************************************
00:03:14.745 END TEST even_2G_alloc
00:03:14.745 ************************************
00:03:14.745 00:39:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:14.745 00:39:49 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:14.745 00:39:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:14.745 00:39:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:14.745 00:39:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
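The odd_alloc test that starts below calls get_test_nr_hugepages 2098176, which the trace shows becoming nr_hugepages=1025, and get_test_nr_hugepages_per_node then spreads that odd count over the two nodes as node0=513 and node1=512. A minimal sketch of that walk-from-the-last-node-down arithmetic, with a hypothetical function name rather than the real setup/hugepages.sh code:

  #!/usr/bin/env bash
  # Sketch only: spread a hugepage count across NUMA nodes, giving each node
  # remaining/nodes_left pages (1025 over 2 nodes -> node1=512, node0=513).
  split_hugepages_per_node() {               # hypothetical name; mirrors the arithmetic in the trace
      local remaining=$1 left=$2 share
      while (( left > 0 )); do
          share=$(( remaining / left ))
          echo "node$(( left - 1 ))=${share}"
          (( remaining -= share, left-- ))
      done
  }

  split_hugepages_per_node 1025 2            # prints node1=512 then node0=513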
00:03:14.745 ************************************
00:03:14.745 START TEST odd_alloc
00:03:14.745 ************************************
00:39:49 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:39:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:39:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:39:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:16.125 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:16.125 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:16.125 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:16.125 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:16.125 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:16.125 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:16.125 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:16.125 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:16.125 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:16.125 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:16.125 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:16.125 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:16.125 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:16.125 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:16.125 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:16.125 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:16.125 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:16.125 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45813492 kB' 'MemAvailable: 49319156 kB' 'Buffers: 2704 kB' 'Cached: 10272300 kB' 'SwapCached: 0 kB' 'Active: 7285208 kB' 'Inactive: 3508668 kB' 'Active(anon): 6889720 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522096 kB' 'Mapped: 219236 kB' 'Shmem: 6370848 kB' 'KReclaimable: 190692 kB' 'Slab: 572056 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381364 kB' 'KernelStack: 13168 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 7999028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB'
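Just above, verify_nr_hugepages also confirmed that transparent hugepages are not forced off: the test [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] is a pattern match against the kernel's THP mode string, where the bracketed word is the active setting. A minimal stand-alone sketch of the same idea, assuming the usual sysfs location for that string (the path is not taken from the trace):

  #!/usr/bin/env bash
  # Sketch only: the THP mode file reads like "always [madvise] never"; the
  # bracketed entry is the active mode, and "[never]" means THP is disabled.
  thp_enabled=/sys/kernel/mm/transparent_hugepage/enabled   # assumed path, for illustration
  if [[ -r $thp_enabled ]]; then
      mode=$(<"$thp_enabled")
      if [[ $mode == *"[never]"* ]]; then
          echo "transparent hugepages disabled: $mode"
      else
          echo "transparent hugepages active mode: $mode"
      fi
  fi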
'Committed_AS: 7999028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:16.125 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.125 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.125 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.125 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.125 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.126 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.126 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.126 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.126 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.126 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.126 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.126 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.126 00:39:50 setup.sh.hugepages.odd_alloc -- 
[... setup/common.sh@31-32 per-field scan: Active(anon) through HardwareCorrupted read and skipped; none match AnonHugePages ...]
00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[... setup/common.sh@17-29 setup for get_meminfo HugePages_Surp: locals declared, mem_f=/proc/meminfo, per-node path unused, mapfile -t mem, 'Node N ' prefixes stripped ...]
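(The trace above is the generic /proc/meminfo lookup that setup/common.sh repeats for every get_meminfo call. As a reference only, here is a minimal stand-alone sketch of that pattern in bash; the helper name is hypothetical, and where the real script loads the file into an array with mapfile, this sketch streams it through sed instead.)

get_meminfo_sketch() {    # hypothetical name, not the actual SPDK helper
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # with a node argument, an assumed per-NUMA-node meminfo file is read instead
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # split each line on ': ', skip fields until the requested one is found
    # (the repeated 'continue' entries in the trace), then print its value
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")   # per-node files prefix lines with "Node N "
    return 1
}
# example: get_meminfo_sketch HugePages_Surp   -> prints 0 on this host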
00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45824216 kB' 'MemAvailable: 49329880 kB' 'Buffers: 2704 kB' 'Cached: 10272300 kB' 'SwapCached: 0 kB' 'Active: 7284628 kB' 'Inactive: 3508668 kB' 'Active(anon): 6889140 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521500 kB' 'Mapped: 219184 kB' 'Shmem: 6370848 kB' 'KReclaimable: 190692 kB' 'Slab: 572032 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381340 kB' 'KernelStack: 12768 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 7999044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.127 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.127 00:39:50 
[... setup/common.sh@31-32 per-field scan: SwapCached through HugePages_Free read and skipped; none match HugePages_Surp ...]
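(Quick cross-check on the snapshot being scanned here: HugePages_Total and HugePages_Free both report 1025 pages, Hugepagesize is 2048 kB, and the reported Hugetlb of 2099200 kB is exactly 1025 x 2048 kB, while HugePages_Rsvd and HugePages_Surp are 0, consistent with the odd 1025-page count this odd_alloc test exercises.)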
-- # IFS=': ' 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45824852 kB' 'MemAvailable: 49330516 kB' 'Buffers: 2704 kB' 'Cached: 10272300 kB' 'SwapCached: 0 kB' 'Active: 7283464 kB' 'Inactive: 3508668 kB' 'Active(anon): 6887976 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520320 kB' 'Mapped: 219160 kB' 'Shmem: 6370848 kB' 'KReclaimable: 190692 kB' 'Slab: 572032 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381340 kB' 'KernelStack: 12784 kB' 'PageTables: 7532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 7999064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196404 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:16.129 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[... setup/common.sh@31-32 per-field scan: MemTotal through FilePmdMapped read and skipped; none match HugePages_Rsvd ...]
# read -r var val _ 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:16.131 nr_hugepages=1025 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:16.131 resv_hugepages=0 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:16.131 surplus_hugepages=0 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:16.131 anon_hugepages=0 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 
-- # local var val 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45825496 kB' 'MemAvailable: 49331160 kB' 'Buffers: 2704 kB' 'Cached: 10272320 kB' 'SwapCached: 0 kB' 'Active: 7282972 kB' 'Inactive: 3508668 kB' 'Active(anon): 6887484 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519808 kB' 'Mapped: 219160 kB' 'Shmem: 6370868 kB' 'KReclaimable: 190692 kB' 'Slab: 572096 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381404 kB' 'KernelStack: 12768 kB' 'PageTables: 7484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 7999084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196404 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.131 00:39:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue
[... setup/common.sh@31-32 per-field scan for HugePages_Total continues: Mlocked through Mapped read and skipped ...]
00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:16.132 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22119128 kB' 'MemUsed: 10757812 kB' 'SwapCached: 0 kB' 'Active: 5483528 kB' 'Inactive: 3265856 kB' 'Active(anon): 5294060 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8442996 kB' 'Mapped: 103068 kB' 'AnonPages: 309568 kB' 'Shmem: 4987672 kB' 'KernelStack: 7720 kB' 'PageTables: 4832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121276 kB' 'Slab: 319216 kB' 'SReclaimable: 121276 kB' 'SUnreclaim: 197940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.133 00:39:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23706116 kB' 'MemUsed: 3958636 kB' 'SwapCached: 0 kB' 'Active: 1799444 kB' 'Inactive: 242812 kB' 'Active(anon): 1593424 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242812 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1832072 kB' 'Mapped: 116092 kB' 'AnonPages: 210260 kB' 'Shmem: 1383240 kB' 'KernelStack: 5048 kB' 'PageTables: 3132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69416 kB' 'Slab: 252912 kB' 'SReclaimable: 69416 kB' 'SUnreclaim: 183496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.134 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 00:39:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.136 00:39:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:16.136 node0=512 expecting 513 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:16.136 node1=513 expecting 512 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:16.136 00:03:16.136 real 0m1.496s 00:03:16.136 user 0m0.646s 00:03:16.136 sys 0m0.814s 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:16.136 00:39:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:16.136 ************************************ 00:03:16.136 END TEST odd_alloc 00:03:16.136 ************************************ 00:03:16.136 00:39:50 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:16.136 00:39:50 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:16.136 00:39:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:16.136 00:39:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.136 00:39:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:16.136 ************************************ 00:03:16.136 START TEST custom_alloc 00:03:16.136 ************************************ 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:16.136 00:39:50 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:16.136 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 
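
The trace above shows the custom_alloc test building its per-node hugepage plan: the first request of 1048576 kB divided by the 2048 kB default page size yields nodes_hp[0]=512, the second request of 2097152 kB yields nodes_hp[1]=1024, and the two are later folded into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' with nr_hugepages totalling 1536, which the test then verifies against /proc/meminfo and the per-node meminfo files. A minimal sketch of how such a plan can be applied by hand through the kernel's per-node sysfs interface is below; the helper name apply_hugenode is hypothetical and is not how scripts/setup.sh necessarily consumes HUGENODE internally, and it assumes the 2048 kB hugepage size reported in this run.

    # Hypothetical helper (not part of the SPDK scripts): apply a plan like
    # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' via sysfs (needs root).
    apply_hugenode() {
        local plan=$1 entry node pages
        local -a entries
        IFS=',' read -ra entries <<< "$plan"
        for entry in "${entries[@]}"; do
            node=${entry#nodes_hp[}; node=${node%%]*}   # extract node index, e.g. "0"
            pages=${entry#*=}                            # extract page count, e.g. "512"
            echo "$pages" > "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
        done
    }
    # Example: apply_hugenode 'nodes_hp[0]=512,nodes_hp[1]=1024'
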
00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.137 00:39:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:17.515 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:17.515 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:17.515 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:17.515 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:17.515 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:17.515 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:17.515 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:17.515 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:17.515 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:17.515 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:17.515 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:17.515 0000:80:04.5 (8086 0e25): Already using the 
vfio-pci driver 00:03:17.515 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:17.515 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:17.515 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:17.515 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:17.515 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.515 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.516 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44758388 kB' 'MemAvailable: 48264052 kB' 'Buffers: 2704 kB' 'Cached: 10272436 kB' 'SwapCached: 0 kB' 'Active: 7283652 kB' 'Inactive: 3508668 kB' 'Active(anon): 6888164 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520352 kB' 'Mapped: 219296 kB' 'Shmem: 6370984 kB' 'KReclaimable: 190692 kB' 'Slab: 572140 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381448 kB' 'KernelStack: 12784 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7999284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB'
[xtrace condensed: setup/common.sh@31-32 reads each /proc/meminfo key in turn (MemTotal through HardwareCorrupted) and skips every non-matching key with continue until the requested AnonHugePages key matches]
00:03:17.516 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.516 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.516 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:17.516 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
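The condensed scan above is get_meminfo in setup/common.sh walking /proc/meminfo one "Key: value" line at a time until the requested key (here AnonHugePages) matches, then echoing its value, which verify_nr_hugepages stores as anon=0. The sketch below reproduces that pattern under a hypothetical helper name (read_meminfo_field); it is an approximation of what the traced code appears to do, not a copy of setup/common.sh.

#!/usr/bin/env bash
# Hypothetical helper, shown only to illustrate the parsing pattern traced above.
shopt -s extglob

read_meminfo_field() {
    local get=$1 node=${2:-}          # key to look up, optional NUMA node number
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node N "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

read_meminfo_field HugePages_Total    # -> 1536 on the box in this log
read_meminfo_field HugePages_Free 0   # same key, restricted to NUMA node 0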
00:03:17.516 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:17.516 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.516 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:17.516 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:17.516 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.516 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.516 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.516 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.516 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.516 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.516 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.516 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.516 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44758732 kB' 'MemAvailable: 48264396 kB' 'Buffers: 2704 kB' 'Cached: 10272440 kB' 'SwapCached: 0 kB' 'Active: 7283136 kB' 'Inactive: 3508668 kB' 'Active(anon): 6887648 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519776 kB' 'Mapped: 219244 kB' 'Shmem: 6370988 kB' 'KReclaimable: 190692 kB' 'Slab: 572116 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381424 kB' 'KernelStack: 12752 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7999304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB'
[xtrace condensed: setup/common.sh@31-32 scans each /proc/meminfo key (MemTotal through Unaccepted), skipping every non-matching key with continue while looking for HugePages_Surp; the match and surp=0 follow below]
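These probes (AnonHugePages, HugePages_Surp and, just below, HugePages_Rsvd) feed verify_nr_hugepages, whose end goal is to confirm that the 512 + 1024 pages requested per node actually show up as 1536 system-wide hugepages with no unexpected surplus. A compact sketch of that final check is below; it is an assumed simplification for illustration, not the verify_nr_hugepages code itself.

#!/usr/bin/env bash
# Assumed simplification of the verification step, for illustration only.
expected=1536                                       # 512 (node0) + 1024 (node1)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)

# Surplus pages were not requested by the test, so they should not count toward
# the configured total; reserved pages are reported for reference.
if (( total - surp == expected )); then
    echo "hugepages OK: total=$total surp=$surp rsvd=$rsvd"
else
    echo "hugepage mismatch: got $(( total - surp )), expected $expected" >&2
    exit 1
fi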
00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.517 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44759216 kB' 'MemAvailable: 48264880 kB' 'Buffers: 2704 kB' 'Cached: 10272456 kB' 'SwapCached: 0 kB' 'Active: 7282980 kB' 'Inactive: 3508668 kB' 'Active(anon): 6887492 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519572 kB' 'Mapped: 219168 kB' 'Shmem: 6371004 kB' 'KReclaimable: 190692 kB' 'Slab: 572132 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381440 kB' 'KernelStack: 12736 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7999324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB'
[xtrace condensed: setup/common.sh@31-32 scans each /proc/meminfo key (MemTotal through AnonHugePages), skipping every non-matching key with continue while looking for HugePages_Rsvd]
00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:17.518 nr_hugepages=1536 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:17.518 resv_hugepages=0 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:17.518 surplus_hugepages=0 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:17.518 anon_hugepages=0 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44759472 kB' 'MemAvailable: 48265136 kB' 'Buffers: 2704 kB' 'Cached: 10272480 kB' 'SwapCached: 0 kB' 'Active: 7283044 kB' 'Inactive: 3508668 kB' 'Active(anon): 6887556 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519648 kB' 'Mapped: 219168 kB' 'Shmem: 6371028 kB' 'KReclaimable: 190692 kB' 'Slab: 572132 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381440 kB' 'KernelStack: 12768 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7999344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.518 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:17.519 
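[annotation] The trace up to this point is setup/common.sh's get_meminfo helper walking /proc/meminfo key by key until it hits HugePages_Total and echoes 1536, after which hugepages.sh records the intended per-node split (512 on node0, 1024 on node1, no_nodes=2). A condensed sketch of that parsing pattern, written here only as an illustration of what the trace exercises (function and variable names mirror the trace, not the verbatim library source):

    get_meminfo_sketch() {            # get_meminfo_sketch KEY [NODE]
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#"Node $node "}           # per-node files prefix every row with "Node <N>"
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                      # e.g. 1536 for HugePages_Total, 0 for HugePages_Rsvd
                return 0
            fi
        done < "$mem_f"
        return 1
    }

Usage in the same spirit as the trace: "get_meminfo_sketch HugePages_Total" prints 1536 on this host, and "get_meminfo_sketch HugePages_Surp 0" prints 0 for node 0.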
00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.519 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22106872 kB' 'MemUsed: 10770068 kB' 'SwapCached: 0 kB' 'Active: 5483672 kB' 'Inactive: 3265856 kB' 'Active(anon): 5294204 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8443124 kB' 'Mapped: 103064 kB' 'AnonPages: 309568 kB' 'Shmem: 4987800 kB' 'KernelStack: 7720 kB' 'PageTables: 4836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121276 kB' 'Slab: 319228 kB' 'SReclaimable: 121276 kB' 'SUnreclaim: 197952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.780 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- 
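[annotation] The node-0 pass above switches mem_f to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " from every row (common.sh@29: mem=("${mem[@]#Node +([0-9]) }")) before the key/value loop runs. A small stand-alone illustration of that mapfile + extglob prefix-strip step, assuming a two-socket host like the one in this log:

    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo   # rows look like "Node 0 MemTotal: ..."
    mem=("${mem[@]#Node +([0-9]) }")                          # drop the "Node <N> " prefix from each row
    printf '%s\n' "${mem[@]}" | grep HugePages_               # now plain "HugePages_Total: 512" etc.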
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22653492 kB' 'MemUsed: 5011260 kB' 'SwapCached: 0 kB' 'Active: 1799552 kB' 'Inactive: 242812 kB' 'Active(anon): 1593532 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242812 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1832080 kB' 'Mapped: 116104 kB' 'AnonPages: 210284 kB' 'Shmem: 1383248 kB' 'KernelStack: 5064 kB' 'PageTables: 3088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69416 kB' 'Slab: 252904 kB' 'SReclaimable: 69416 kB' 'SUnreclaim: 183488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.781 00:39:52 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:17.782 00:39:52 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:17.782 node0=512 expecting 512 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:17.782 node1=1024 expecting 1024 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:17.782 00:03:17.782 real 0m1.471s 00:03:17.782 user 0m0.608s 00:03:17.782 sys 0m0.824s 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:17.782 00:39:52 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:17.782 ************************************ 00:03:17.782 END TEST custom_alloc 00:03:17.782 ************************************ 00:03:17.782 00:39:52 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:17.782 00:39:52 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:17.782 00:39:52 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:17.782 00:39:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:17.782 00:39:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:17.782 ************************************ 00:03:17.782 START TEST no_shrink_alloc 00:03:17.782 ************************************ 00:03:17.782 00:39:52 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:17.782 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:17.782 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:17.782 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:17.782 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:17.782 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:17.782 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:17.782 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:17.783 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:17.783 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:17.783 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:17.783 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:17.783 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:17.783 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:17.783 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:17.783 00:39:52 
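
The trace above closes the custom_alloc case (node0=512 and node1=1024 hugepages, matching expectations) and opens no_shrink_alloc with get_test_nr_hugepages 2097152 0: 2097152 kB at the default 2048 kB hugepage size works out to 1024 pages, all assigned to node 0. The following is a minimal stand-alone sketch of that size-to-page-count computation; the variable names (want_kb, default_kb, node_ids) are illustrative and not the script's own.

#!/usr/bin/env bash
# Sketch: turn a requested size in kB into a hugepage count and pin it to
# the nodes named on the command line (here node 0 only).
want_kb=2097152                                                    # 2 GiB requested
default_kb=$(grep Hugepagesize /proc/meminfo | awk '{print $2}')   # typically 2048
(( want_kb >= default_kb )) || { echo "size smaller than one hugepage" >&2; exit 1; }
nr_hugepages=$(( want_kb / default_kb ))                           # 2097152 / 2048 = 1024

node_ids=("0")
declare -A nodes_test
for node in "${node_ids[@]}"; do
    nodes_test[$node]=$nr_hugepages
done

for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[$node]} expecting ${nodes_test[$node]}"
done
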
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:17.783 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:17.783 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:17.783 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:17.783 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:17.783 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:17.783 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.783 00:39:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:19.197 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:19.197 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:19.197 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:19.197 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:19.197 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:19.197 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:19.197 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:19.197 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:19.198 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:19.198 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:19.198 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:19.198 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:19.198 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:19.198 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:19.198 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:19.198 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:19.198 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45765644 kB' 'MemAvailable: 49271308 kB' 'Buffers: 2704 kB' 'Cached: 10272564 kB' 'SwapCached: 0 kB' 'Active: 7283444 kB' 'Inactive: 3508668 kB' 'Active(anon): 6887956 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520084 kB' 'Mapped: 219316 kB' 'Shmem: 6371112 kB' 'KReclaimable: 190692 kB' 'Slab: 572100 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381408 kB' 'KernelStack: 12800 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7999740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 
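
The block above is the get_meminfo lookup at work: /proc/meminfo (or the per-node copy under /sys/devices/system/node/node$node/meminfo when a node is given) is read into an array, any "Node N " prefix is stripped, and each "key: value" line is compared against the requested field until it matches; AnonHugePages is 0 kB in the dump, so the lookup echoes 0. A hedged stand-alone sketch of that lookup follows, using a hypothetical helper name meminfo_value rather than the script's own function.

#!/usr/bin/env bash
shopt -s extglob

# Sketch of the meminfo scan shown in the trace; meminfo_value and its
# arguments are illustrative names, not the test suite's API.
meminfo_value() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    # Per-node queries read the node-local copy when it exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo

    local -a mem
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node N "; strip it so keys match.
    mem=("${mem[@]#Node +([0-9]) }")

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done
    echo 0
}

meminfo_value AnonHugePages      # system-wide lookup
meminfo_value HugePages_Surp 0   # node 0, when the node meminfo file exists
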
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 
00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.198 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45764668 kB' 'MemAvailable: 49270332 kB' 'Buffers: 2704 kB' 'Cached: 10272568 kB' 'SwapCached: 0 kB' 'Active: 7283332 kB' 'Inactive: 3508668 kB' 'Active(anon): 6887844 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519968 kB' 'Mapped: 219260 kB' 'Shmem: 6371116 kB' 'KReclaimable: 190692 kB' 'Slab: 572100 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381408 kB' 'KernelStack: 12800 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 
kB' 'Committed_AS: 7999756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.199 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.200 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45766140 kB' 'MemAvailable: 49271804 kB' 'Buffers: 2704 kB' 'Cached: 10272584 kB' 'SwapCached: 0 kB' 'Active: 7283436 kB' 'Inactive: 3508668 kB' 'Active(anon): 6887948 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520048 kB' 'Mapped: 219184 kB' 'Shmem: 6371132 kB' 'KReclaimable: 190692 kB' 'Slab: 572108 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381416 kB' 'KernelStack: 12832 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7999780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.201 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.202 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:19.203 nr_hugepages=1024 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:19.203 resv_hugepages=0 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:19.203 surplus_hugepages=0 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:19.203 anon_hugepages=0 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45766796 kB' 'MemAvailable: 49272460 kB' 'Buffers: 2704 kB' 'Cached: 10272608 kB' 'SwapCached: 0 kB' 'Active: 7283472 kB' 'Inactive: 3508668 kB' 'Active(anon): 6887984 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520048 kB' 'Mapped: 219184 kB' 'Shmem: 6371156 kB' 'KReclaimable: 190692 kB' 'Slab: 572108 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381416 kB' 'KernelStack: 12832 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7999804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.203 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.204 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:19.205 00:39:53 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21042960 kB' 'MemUsed: 11833980 kB' 'SwapCached: 0 kB' 'Active: 5483288 kB' 'Inactive: 3265856 kB' 'Active(anon): 5293820 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8443176 kB' 'Mapped: 103064 kB' 'AnonPages: 309144 kB' 'Shmem: 4987852 kB' 'KernelStack: 7704 kB' 'PageTables: 4772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121276 kB' 'Slab: 319184 kB' 'SReclaimable: 121276 kB' 'SUnreclaim: 197908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.205 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.206 00:39:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.206 00:39:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:19.206 node0=1024 expecting 1024 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.206 00:39:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.588 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:20.588 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:20.588 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:20.588 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:20.588 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:20.588 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:20.589 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:20.589 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:20.589 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:20.589 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:20.589 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:20.589 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:20.589 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:20.589 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:20.589 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:20.589 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:20.589 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 
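What the trace above and below is doing: verify_nr_hugepages checks the per-node hugepage counts (node0=1024 expecting 1024), then re-runs setup.sh output with NRHUGE=512 and CLEAR_HUGE=no, which produces the "Requested 512 hugepages but 1024 already allocated on node0" message that follows. Each get_meminfo call walks /proc/meminfo one field at a time with IFS=': ' and read -r var val _, continuing past every field that is not the one requested (AnonHugePages, HugePages_Surp, HugePages_Rsvd) and echoing the value once it matches. Below is a minimal standalone sketch of that lookup; it follows the names and steps visible in the trace rather than the exact setup/common.sh implementation.

#!/usr/bin/env bash
# Minimal sketch of the meminfo lookup exercised in the xtrace above.
# get_meminfo FIELD [NODE] prints the value of FIELD, read either from
# /proc/meminfo or, when NODE names an existing NUMA node, from
# /sys/devices/system/node/nodeNODE/meminfo.
get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f=/proc/meminfo

    # Prefer the per-node view when one was asked for and it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    # Walk the file one "Field: value [unit]" line at a time, skipping
    # everything until the requested field shows up. The per-node files
    # prefix each line with "Node N ", which is stripped first (the traced
    # helper does the same with a "${mem[@]#Node +([0-9]) }" expansion).
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1
}

# Example: the surplus hugepage count the verification above is after.
get_meminfo HugePages_Surp

Because the pool is not shrunk in this test (no_shrink_alloc), HugePages_Total and HugePages_Free stay at 1024 while HugePages_Surp and HugePages_Rsvd read back as 0, which is what the repeated meminfo scans below settle on.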
00:03:20.589 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45770208 kB' 'MemAvailable: 49275872 kB' 'Buffers: 2704 kB' 'Cached: 10272676 kB' 'SwapCached: 0 kB' 'Active: 7286736 kB' 'Inactive: 3508668 kB' 'Active(anon): 6891248 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523272 kB' 'Mapped: 219352 kB' 'Shmem: 6371224 kB' 'KReclaimable: 190692 kB' 'Slab: 572148 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381456 kB' 'KernelStack: 13104 kB' 'PageTables: 9720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8002216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196916 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:20.589 
00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45779096 kB' 'MemAvailable: 49284760 kB' 'Buffers: 2704 kB' 'Cached: 10272676 kB' 'SwapCached: 0 kB' 'Active: 7284404 kB' 'Inactive: 3508668 kB' 'Active(anon): 6888916 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520808 kB' 'Mapped: 219208 kB' 'Shmem: 6371224 kB' 'KReclaimable: 190692 kB' 'Slab: 572144 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381452 kB' 'KernelStack: 13056 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8000688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196772 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.590 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.591 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.592 00:39:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45780580 kB' 'MemAvailable: 49286244 kB' 'Buffers: 2704 kB' 'Cached: 10272684 kB' 'SwapCached: 0 kB' 'Active: 7283668 kB' 'Inactive: 3508668 kB' 'Active(anon): 6888180 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520088 kB' 'Mapped: 219208 kB' 'Shmem: 6371232 kB' 'KReclaimable: 190692 kB' 'Slab: 572136 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381444 kB' 'KernelStack: 12688 kB' 'PageTables: 7536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7999896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.592 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.593 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:20.594 nr_hugepages=1024 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:20.594 resv_hugepages=0 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:20.594 surplus_hugepages=0 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:20.594 anon_hugepages=0 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
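The long run of "[[ field == HugePages_Rsvd ]] / continue" records above is setup/common.sh's get_meminfo helper walking every /proc/meminfo line until it reaches the requested field (HugePages_Rsvd here, giving resv=0 before the nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 echoes). A minimal standalone sketch of the same parsing idea, using only the paths and field layout visible in the trace; the function name and error handling below are illustrative and not the SPDK original:

# get_meminfo_sketch FIELD [NODE] - print FIELD's value from /proc/meminfo,
# or from /sys/devices/system/node/node$NODE/meminfo when NODE is given.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#Node [0-9] }          # per-node files prefix lines with "Node N " (single-digit IDs)
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                   # the "kB" unit, when present, falls into _
            return 0
        fi
    done <"$mem_f"
    return 1
}

On the box traced here, get_meminfo_sketch HugePages_Total would print 1024 and get_meminfo_sketch HugePages_Surp 0 would print the node-0 surplus, 0.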
00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45781520 kB' 'MemAvailable: 49287184 kB' 'Buffers: 2704 kB' 'Cached: 10272688 kB' 'SwapCached: 0 kB' 'Active: 7283644 kB' 'Inactive: 3508668 kB' 'Active(anon): 6888156 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520072 kB' 'Mapped: 219192 kB' 'Shmem: 6371236 kB' 'KReclaimable: 190692 kB' 'Slab: 572168 kB' 'SReclaimable: 190692 kB' 'SUnreclaim: 381476 kB' 'KernelStack: 12752 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7999916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2491996 kB' 'DirectMap2M: 20496384 kB' 'DirectMap1G: 46137344 kB' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.594 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.595 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21057144 kB' 'MemUsed: 11819796 kB' 'SwapCached: 0 kB' 'Active: 5483760 kB' 'Inactive: 3265856 kB' 'Active(anon): 5294292 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8443188 kB' 'Mapped: 103064 kB' 'AnonPages: 309544 kB' 'Shmem: 4987864 kB' 'KernelStack: 7704 kB' 'PageTables: 4760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121276 kB' 'Slab: 319160 kB' 'SReclaimable: 121276 kB' 'SUnreclaim: 197884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.596 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:20.597 node0=1024 expecting 1024 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:20.597 00:03:20.597 real 0m2.920s 00:03:20.597 user 0m1.194s 00:03:20.597 sys 0m1.651s 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:20.597 00:39:55 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:20.597 ************************************ 00:03:20.597 END TEST no_shrink_alloc 
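That closes no_shrink_alloc: the test re-reads HugePages_Total (1024), collects the per-node counts under /sys/devices/system/node, queries node 0's meminfo for HugePages_Surp, and ends with the "node0=1024 expecting 1024" assertion. A rough sketch of that per-node accounting under the assumptions visible in the trace (2048 kB pages, two nodes); the exact sysfs file the SPDK helper reads is not shown in this excerpt, so the nr_hugepages path below is an assumption:

# One entry per /sys/devices/system/node/nodeN directory, as in the trace.
nodes_sys=()
for node_dir in /sys/devices/system/node/node[0-9]*; do
    [[ -d $node_dir ]] || continue        # skip the unexpanded glob if no nodes match
    # Assumed path: per-node count of 2048 kB hugepages (Hugepagesize above).
    nodes_sys[${node_dir##*node}]=$(< "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#nodes_sys[@]}"          # 2 in the run above

# The "node0=1024 expecting 1024" line asserts the per-node split still adds
# up to the global pool reported by /proc/meminfo (1024 pages in this run).
total=0
for n in "${!nodes_sys[@]}"; do
    echo "node$n=${nodes_sys[$n]}"
    (( total += nodes_sys[n] ))
done
(( total == 1024 )) && echo "per-node counts match HugePages_Total"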
00:03:20.597 ************************************ 00:03:20.597 00:39:55 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:20.597 00:39:55 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:20.597 00:39:55 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:20.597 00:39:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:20.597 00:39:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.597 00:39:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:20.597 00:39:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.597 00:39:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:20.597 00:39:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:20.597 00:39:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.598 00:39:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:20.598 00:39:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.598 00:39:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:20.598 00:39:55 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:20.598 00:39:55 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:20.598 00:03:20.598 real 0m11.573s 00:03:20.598 user 0m4.464s 00:03:20.598 sys 0m6.042s 00:03:20.598 00:39:55 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:20.598 00:39:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:20.598 ************************************ 00:03:20.598 END TEST hugepages 00:03:20.598 ************************************ 00:03:20.598 00:39:55 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:20.598 00:39:55 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:20.598 00:39:55 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.598 00:39:55 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.598 00:39:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:20.857 ************************************ 00:03:20.857 START TEST driver 00:03:20.857 ************************************ 00:03:20.857 00:39:55 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:20.857 * Looking for test storage... 
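clear_hp, traced above, visits every NUMA node and every supported hugepage size and writes 0 back into the sysfs counter before exporting CLEAR_HUGE=yes for later setup.sh runs. A sketch of that teardown, assuming root privileges since the nr_hugepages files are root-writable:

# release every hugepage the tests may have reserved, on every node and size
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"     # e.g. .../hugepages-2048kB/nr_hugepages
    done
done
export CLEAR_HUGE=yes                   # tell the next setup.sh invocation to start clean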
00:03:20.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:20.857 00:39:55 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:20.857 00:39:55 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:20.857 00:39:55 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:23.394 00:39:57 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:23.394 00:39:57 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:23.394 00:39:57 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.394 00:39:57 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:23.394 ************************************ 00:03:23.394 START TEST guess_driver 00:03:23.394 ************************************ 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:23.394 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:23.394 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:23.394 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:23.394 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:23.394 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:23.394 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:23.394 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:23.394 00:39:57 setup.sh.driver.guess_driver 
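guess_driver settles on vfio-pci here because the host exposes 141 IOMMU groups and modprobe --show-depends vfio_pci resolves to real .ko.xz modules; with no IOMMU groups (and unsafe no-IOMMU mode left at N) it would report 'No valid driver found' instead. A condensed sketch of that decision; pick_driver is an illustrative wrapper, not the exact SPDK function:

# decide whether vfio-pci is usable on this host
pick_driver() {
    local unsafe_vfio groups=(/sys/kernel/iommu_groups/*)
    unsafe_vfio=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        # the module and all of its dependencies must exist as loadable objects
        if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
            return 0
        fi
    fi
    echo 'No valid driver found'
    return 1
}

driver=$(pick_driver)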
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:23.394 Looking for driver=vfio-pci 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.394 00:39:57 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.768 00:39:59 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.768 00:39:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:25.702 00:40:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:25.702 00:40:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:25.702 00:40:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:25.702 00:40:00 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:25.702 00:40:00 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:25.702 00:40:00 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:25.702 00:40:00 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.234 00:03:28.234 real 0m4.881s 00:03:28.234 user 0m1.117s 00:03:28.234 sys 0m1.873s 00:03:28.234 00:40:02 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:28.234 00:40:02 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:28.234 ************************************ 00:03:28.234 END TEST guess_driver 00:03:28.234 ************************************ 00:03:28.234 00:40:02 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:28.234 00:03:28.234 real 0m7.512s 00:03:28.234 user 0m1.677s 00:03:28.234 sys 0m2.947s 00:03:28.234 00:40:02 
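The long run of '[[ -> == \-\> ]] / [[ vfio-pci == vfio-pci ]]' pairs above is the verification half of guess_driver: it replays scripts/setup.sh config and, for every output line whose fifth field is the '->' marker, checks that the sixth field (the driver the device is bound to) matches the vfio-pci it just picked, bumping fail on any mismatch. The same loop in isolation, assuming the '... -> <driver>' line format seen in the trace:

# confirm every device line reports the driver we expect
fail=0
while read -r _ _ _ _ marker setup_driver; do
    [[ $marker != '->' ]] && continue              # ignore lines without a binding arrow
    [[ $setup_driver == "$driver" ]] || ((fail++))
done < <("$rootdir/scripts/setup.sh" config)       # $rootdir = the spdk checkout in the paths above
(( fail == 0 )) && echo "all devices bound to $driver"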
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:28.234 00:40:02 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:28.234 ************************************ 00:03:28.234 END TEST driver 00:03:28.234 ************************************ 00:03:28.234 00:40:02 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:28.234 00:40:02 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:28.234 00:40:02 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:28.234 00:40:02 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.234 00:40:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:28.234 ************************************ 00:03:28.234 START TEST devices 00:03:28.234 ************************************ 00:03:28.234 00:40:02 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:28.234 * Looking for test storage... 00:03:28.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:28.234 00:40:02 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:28.234 00:40:02 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:28.234 00:40:02 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:28.234 00:40:02 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:30.138 00:40:04 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:30.138 00:40:04 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:30.138 00:40:04 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:30.138 00:40:04 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:30.138 00:40:04 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:30.138 00:40:04 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:30.138 00:40:04 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:30.138 00:40:04 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:30.138 00:40:04 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:30.138 00:40:04 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:30.138 00:40:04 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:30.138 00:40:04 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:30.138 00:40:04 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:30.138 00:40:04 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:30.138 00:40:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:30.138 00:40:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:30.138 00:40:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:30.138 00:40:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:30.138 00:40:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:30.138 00:40:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:30.138 00:40:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:30.138 
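Before any mounting, get_zoned_devs scans /sys/block/nvme* and records every namespace whose queue/zoned attribute is anything other than 'none', so zoned drives stay out of the destructive tests (here nvme0n1 reports none and passes). A sketch of that filter:

# collect zoned block devices so they can be skipped later
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
        zoned_devs[${nvme##*/}]=1                  # e.g. zoned_devs[nvme1n1]=1
    fi
done
echo "zoned devices: ${!zoned_devs[*]}"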
00:40:04 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:30.138 No valid GPT data, bailing 00:03:30.138 00:40:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:30.138 00:40:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:30.138 00:40:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:30.138 00:40:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:30.138 00:40:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:30.138 00:40:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:30.138 00:40:04 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:30.138 00:40:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:30.138 00:40:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:30.138 00:40:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:30.138 00:40:04 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:30.138 00:40:04 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:30.138 00:40:04 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:30.138 00:40:04 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.138 00:40:04 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.138 00:40:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:30.138 ************************************ 00:03:30.138 START TEST nvme_mount 00:03:30.138 ************************************ 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
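block_in_use declares nvme0n1 free because spdk-gpt.py finds no SPDK-style GPT ('No valid GPT data, bailing') and blkid reports no partition-table type at all; the drive then becomes the test disk once its 1000204886016 bytes clear the 3 GiB minimum. A rough equivalent of those two gates (the blkid probe stands in for the spdk-gpt.py check, which is SPDK-specific):

min_disk_size=$((3 * 1024 * 1024 * 1024))          # 3221225472 bytes

disk_is_free() {                                   # no partition table of any kind
    [[ -z $(blkid -s PTTYPE -o value "/dev/$1") ]]
}

disk_is_big_enough() {
    local sectors
    sectors=$(<"/sys/block/$1/size")               # 512-byte sectors
    (( sectors * 512 >= min_disk_size ))
}

disk_is_free nvme0n1 && disk_is_big_enough nvme0n1 && test_disk=nvme0n1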
# (( part <= part_no )) 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:30.138 00:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:31.075 Creating new GPT entries in memory. 00:03:31.075 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:31.075 other utilities. 00:03:31.075 00:40:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:31.075 00:40:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:31.075 00:40:05 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:31.075 00:40:05 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:31.075 00:40:05 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:32.014 Creating new GPT entries in memory. 00:03:32.015 The operation has completed successfully. 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2507481 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:32.015 00:40:06 
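partition_drive first wipes the label with sgdisk --zap-all, then, holding flock on the disk so nothing races the partition-table rewrite, carves one 1 GiB partition at sectors 2048-2099199; mkfs then formats nvme0n1p1 as ext4 and mounts it under test/setup/nvme_mount. The same sequence, minus the uevent synchronisation the real script does through sync_dev_uevents.sh (destructive - dedicated test disks only):

disk=nvme0n1
mnt=$rootdir/test/setup/nvme_mount                 # $rootdir = the spdk checkout

sgdisk "/dev/$disk" --zap-all                      # destroy GPT and MBR data
flock "/dev/$disk" sgdisk "/dev/$disk" --new=1:2048:2099199   # 1 GiB partition

mkdir -p "$mnt"
mkfs.ext4 -qF "/dev/${disk}p1"
mount "/dev/${disk}p1" "$mnt"
: > "$mnt/test_nvme"                               # dummy file the verify step checks for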
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.015 00:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- 
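The run of '[[ 0000:xx:04.x == 0000:88:00.0 ]]' comparisons here is verify() replaying scripts/setup.sh config with PCI_ALLOWED pinned to the NVMe's address: only the test disk's line may match, and its status must read 'Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev', i.e. setup.sh noticed the live mount and refused to unbind the device. The shape of that check, using the line format visible in the trace:

# make sure setup.sh sees our mount and therefore leaves the device alone
PCI_ALLOWED=0000:88:00.0 "$rootdir/scripts/setup.sh" config |
while read -r pci _ _ status; do
    [[ $pci != 0000:88:00.0 ]] && continue
    if [[ $status == *"Active devices: "*"nvme0n1:nvme0n1p1"* ]]; then
        echo 'disk is protected by its active mount'
    fi
done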
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.948 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.208 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:33.208 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:33.208 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.208 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:33.208 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:33.208 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:33.208 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.208 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.208 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:33.208 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:33.208 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:33.208 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:33.208 00:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:33.467 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:33.467 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:33.467 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:33.467 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:33.467 00:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:33.467 00:40:08 setup.sh.devices.nvme_mount -- 
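cleanup_nvme is the inverse path: unmount if still mounted, then wipefs the partition and the whole disk so the next case starts from a blank device. The '2 bytes were erased ... 53 ef' lines are wipefs removing the ext4 superblock magic (0xEF53), and the '45 46 49 20 50 41 52 54' lines are the ASCII 'EFI PART' GPT signatures at both ends of the disk. Sketch:

# tear the nvme_mount fixture back down
mountpoint -q "$mnt" && umount "$mnt"
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # drops the ext4 magic
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # drops GPT/PMBR signatures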
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:33.467 00:40:08 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.467 00:40:08 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:33.467 00:40:08 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:33.467 00:40:08 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.467 00:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:33.467 00:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:33.467 00:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:33.467 00:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.467 00:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:33.467 00:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:33.467 00:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:33.467 00:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:33.467 00:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:33.467 00:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.467 00:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:33.467 00:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:33.467 00:40:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.468 00:40:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.402 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.661 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:34.662 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:34.662 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.662 00:40:09 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:34.662 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:34.662 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.662 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:34.662 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:34.662 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:34.662 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:34.662 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:34.662 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:34.662 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:34.662 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:34.662 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.662 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:34.662 00:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:34.662 00:40:09 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.662 00:40:09 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:36.039 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:36.039 00:03:36.039 real 0m6.072s 00:03:36.039 user 0m1.406s 00:03:36.039 sys 0m2.241s 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.039 00:40:10 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:03:36.039 ************************************ 00:03:36.039 END TEST nvme_mount 00:03:36.039 ************************************ 00:03:36.039 00:40:10 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:36.039 00:40:10 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:36.039 00:40:10 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.039 00:40:10 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.039 00:40:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:36.039 ************************************ 00:03:36.039 START TEST dm_mount 00:03:36.039 ************************************ 00:03:36.039 00:40:10 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:36.039 00:40:10 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:36.040 00:40:10 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:36.978 Creating new GPT entries in memory. 00:03:36.978 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:36.978 other utilities. 00:03:36.978 00:40:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:36.978 00:40:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:36.978 00:40:11 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
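dm_mount re-partitions the same disk, this time into two 1 GiB partitions. The arithmetic in the trace converts the 1073741824-byte size into 512-byte sectors and places each partition immediately after the previous one, with the first start pinned at sector 2048. The same bookkeeping in isolation:

# lay out N equal partitions back to back, sgdisk-style
size=$((1073741824 / 512))          # 1 GiB in 512-byte sectors
part_start=0 part_end=0
for part in 1 2; do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=${part}:${part_start}:${part_end}
done
# yields --new=1:2048:2099199 and --new=2:2099200:4196351, exactly as logged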
2048 : part_end + 1 )) 00:03:36.978 00:40:11 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:36.978 00:40:11 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:38.361 Creating new GPT entries in memory. 00:03:38.361 The operation has completed successfully. 00:03:38.361 00:40:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:38.361 00:40:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:38.361 00:40:12 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:38.361 00:40:12 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:38.361 00:40:12 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:39.305 The operation has completed successfully. 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2509873 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- 
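With both partitions in place, the test builds a device-mapper target named nvme_dm_test, polls up to five times for /dev/mapper/nvme_dm_test to appear, resolves it to its dm-0 node, and confirms each partition lists dm-0 under its holders before formatting the mapper device. The dmsetup table itself is not shown in this trace, so the linear concatenation below is an assumption made only to keep the sketch self-contained:

# stitch nvme0n1p1 + nvme0n1p2 into one dm device
p1_sec=$(blockdev --getsz /dev/nvme0n1p1)          # sizes in 512-byte sectors
p2_sec=$(blockdev --getsz /dev/nvme0n1p2)
dmsetup create nvme_dm_test <<EOF
0 $p1_sec linear /dev/nvme0n1p1 0
$p1_sec $p2_sec linear /dev/nvme0n1p2 0
EOF

for t in {1..5}; do                                # node creation is asynchronous
    [[ -e /dev/mapper/nvme_dm_test ]] && break
    sleep 1
done

dm=$(readlink -f /dev/mapper/nvme_dm_test)         # e.g. /dev/dm-0
dm=${dm##*/}
[[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]    # both partitions now hold dm-0
[[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]

mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mount /dev/mapper/nvme_dm_test "$rootdir/test/setup/dm_mount"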
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:39.305 00:40:13 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.306 00:40:13 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:40.239 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.240 00:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.499 00:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:40.499 00:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:40.499 00:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:40.499 00:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:40.499 00:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:40.499 00:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:40.499 00:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:40.499 00:40:15 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:40.499 00:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:40.499 00:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:40.499 00:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:40.499 00:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:40.499 00:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:40.499 00:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:40.499 00:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.499 00:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:40.499 00:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:40.499 00:40:15 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.499 00:40:15 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:41.436 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.437 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.695 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:41.695 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:41.695 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:41.695 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:41.695 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:41.695 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:41.695 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:41.695 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:41.695 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:41.695 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:41.695 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:41.695 00:40:16 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:41.695 00:03:41.695 real 0m5.680s 00:03:41.695 user 0m0.931s 00:03:41.695 sys 0m1.587s 00:03:41.695 00:40:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.695 00:40:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:41.695 ************************************ 00:03:41.695 END TEST dm_mount 00:03:41.695 ************************************ 00:03:41.695 00:40:16 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:03:41.695 00:40:16 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:41.695 00:40:16 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:41.695 00:40:16 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.695 00:40:16 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:41.695 00:40:16 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:41.695 00:40:16 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:41.695 00:40:16 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:41.955 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:41.955 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:41.955 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:41.955 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:41.955 00:40:16 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:41.955 00:40:16 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:41.955 00:40:16 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:41.955 00:40:16 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:41.955 00:40:16 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:41.955 00:40:16 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:41.955 00:40:16 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:41.955 00:03:41.955 real 0m13.743s 00:03:41.955 user 0m3.056s 00:03:41.955 sys 0m4.868s 00:03:41.955 00:40:16 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.955 00:40:16 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:41.955 ************************************ 00:03:41.955 END TEST devices 00:03:41.955 ************************************ 00:03:41.955 00:40:16 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:41.955 00:03:41.955 real 0m43.387s 00:03:41.955 user 0m12.363s 00:03:41.955 sys 0m19.239s 00:03:41.955 00:40:16 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.955 00:40:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:41.955 ************************************ 00:03:41.955 END TEST setup.sh 00:03:41.955 ************************************ 00:03:41.955 00:40:16 -- common/autotest_common.sh@1142 -- # return 0 00:03:41.955 00:40:16 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:43.331 Hugepages 00:03:43.331 node hugesize free / total 00:03:43.331 node0 1048576kB 0 / 0 00:03:43.331 node0 2048kB 2048 / 2048 00:03:43.331 node1 1048576kB 0 / 0 00:03:43.331 node1 2048kB 0 / 0 00:03:43.331 00:03:43.331 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:43.331 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:43.331 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:43.331 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:43.331 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:43.331 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:43.331 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:43.331 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:43.331 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:43.331 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:43.331 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:43.331 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:43.331 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:43.331 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:43.331 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:43.331 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:43.331 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:43.331 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:43.331 00:40:17 -- spdk/autotest.sh@130 -- # uname -s 00:03:43.331 00:40:17 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:43.331 00:40:17 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:43.331 00:40:17 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:44.707 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:44.707 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:44.707 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:44.707 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:44.707 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:44.707 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:44.707 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:44.707 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:44.707 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:44.707 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:44.707 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:44.707 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:44.707 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:44.707 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:44.707 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:44.707 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:45.644 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:45.644 00:40:20 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:46.578 00:40:21 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:46.578 00:40:21 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:46.578 00:40:21 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:46.578 00:40:21 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:46.578 00:40:21 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:46.578 00:40:21 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:46.578 00:40:21 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:46.578 00:40:21 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:46.578 00:40:21 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:46.836 00:40:21 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:46.836 00:40:21 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:46.836 00:40:21 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:47.768 Waiting for block devices as requested 00:03:47.768 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:48.026 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:48.026 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:48.026 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:48.284 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:48.284 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:48.284 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:48.284 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:48.542 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:03:48.542 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:48.542 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:48.542 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:48.799 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:48.799 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:48.799 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:48.799 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:49.057 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:49.057 00:40:23 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:49.057 00:40:23 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:49.057 00:40:23 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:49.057 00:40:23 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:03:49.057 00:40:23 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:49.057 00:40:23 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:49.057 00:40:23 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:49.057 00:40:23 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:49.057 00:40:23 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:49.058 00:40:23 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:49.058 00:40:23 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:49.058 00:40:23 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:49.058 00:40:23 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:49.058 00:40:23 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:49.058 00:40:23 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:49.058 00:40:23 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:49.058 00:40:23 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:49.058 00:40:23 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:49.058 00:40:23 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:49.058 00:40:23 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:49.058 00:40:23 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:49.058 00:40:23 -- common/autotest_common.sh@1557 -- # continue 00:03:49.058 00:40:23 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:49.058 00:40:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:49.058 00:40:23 -- common/autotest_common.sh@10 -- # set +x 00:03:49.058 00:40:23 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:49.058 00:40:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:49.058 00:40:23 -- common/autotest_common.sh@10 -- # set +x 00:03:49.058 00:40:23 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:50.429 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:50.429 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:50.429 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:50.429 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:50.429 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:50.429 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:50.429 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:50.429 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:50.429 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:50.429 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:03:50.429 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:50.429 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:50.429 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:50.429 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:50.429 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:50.429 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:51.392 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:51.651 00:40:26 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:51.651 00:40:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:51.651 00:40:26 -- common/autotest_common.sh@10 -- # set +x 00:03:51.651 00:40:26 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:51.651 00:40:26 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:51.651 00:40:26 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:51.651 00:40:26 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:51.651 00:40:26 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:51.651 00:40:26 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:51.651 00:40:26 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:51.651 00:40:26 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:51.651 00:40:26 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:51.651 00:40:26 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:51.651 00:40:26 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:51.651 00:40:26 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:51.651 00:40:26 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:51.651 00:40:26 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:51.651 00:40:26 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:51.651 00:40:26 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:51.651 00:40:26 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:51.651 00:40:26 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:51.651 00:40:26 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:03:51.651 00:40:26 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:03:51.651 00:40:26 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2515061 00:03:51.651 00:40:26 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:51.651 00:40:26 -- common/autotest_common.sh@1598 -- # waitforlisten 2515061 00:03:51.651 00:40:26 -- common/autotest_common.sh@829 -- # '[' -z 2515061 ']' 00:03:51.651 00:40:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:51.651 00:40:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:51.651 00:40:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:51.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:51.651 00:40:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:51.651 00:40:26 -- common/autotest_common.sh@10 -- # set +x 00:03:51.651 [2024-07-16 00:40:26.319389] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:03:51.651 [2024-07-16 00:40:26.319470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2515061 ] 00:03:51.651 EAL: No free 2048 kB hugepages reported on node 1 00:03:51.651 [2024-07-16 00:40:26.381501] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.910 [2024-07-16 00:40:26.498045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.844 00:40:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:52.844 00:40:27 -- common/autotest_common.sh@862 -- # return 0 00:03:52.844 00:40:27 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:52.844 00:40:27 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:52.844 00:40:27 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:56.189 nvme0n1 00:03:56.189 00:40:30 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:56.189 [2024-07-16 00:40:30.560557] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:56.189 [2024-07-16 00:40:30.560617] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:56.189 request: 00:03:56.189 { 00:03:56.189 "nvme_ctrlr_name": "nvme0", 00:03:56.189 "password": "test", 00:03:56.189 "method": "bdev_nvme_opal_revert", 00:03:56.189 "req_id": 1 00:03:56.189 } 00:03:56.189 Got JSON-RPC error response 00:03:56.189 response: 00:03:56.189 { 00:03:56.189 "code": -32603, 00:03:56.189 "message": "Internal error" 00:03:56.189 } 00:03:56.189 00:40:30 -- common/autotest_common.sh@1604 -- # true 00:03:56.190 00:40:30 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:56.190 00:40:30 -- common/autotest_common.sh@1608 -- # killprocess 2515061 00:03:56.190 00:40:30 -- common/autotest_common.sh@948 -- # '[' -z 2515061 ']' 00:03:56.190 00:40:30 -- common/autotest_common.sh@952 -- # kill -0 2515061 00:03:56.190 00:40:30 -- common/autotest_common.sh@953 -- # uname 00:03:56.190 00:40:30 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:56.190 00:40:30 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2515061 00:03:56.190 00:40:30 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:56.190 00:40:30 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:56.190 00:40:30 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2515061' 00:03:56.190 killing process with pid 2515061 00:03:56.190 00:40:30 -- common/autotest_common.sh@967 -- # kill 2515061 00:03:56.190 00:40:30 -- common/autotest_common.sh@972 -- # wait 2515061 00:03:58.087 00:40:32 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:58.087 00:40:32 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:58.087 00:40:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:58.087 00:40:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:58.087 00:40:32 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:58.087 00:40:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:58.087 00:40:32 -- common/autotest_common.sh@10 -- # set +x 00:03:58.087 00:40:32 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:58.087 00:40:32 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:58.087 00:40:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.087 00:40:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.087 00:40:32 -- common/autotest_common.sh@10 -- # set +x 00:03:58.087 ************************************ 00:03:58.087 START TEST env 00:03:58.087 ************************************ 00:03:58.087 00:40:32 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:58.087 * Looking for test storage... 00:03:58.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:58.087 00:40:32 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:58.087 00:40:32 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.087 00:40:32 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.087 00:40:32 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.087 ************************************ 00:03:58.087 START TEST env_memory 00:03:58.087 ************************************ 00:03:58.087 00:40:32 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:58.087 00:03:58.087 00:03:58.087 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.087 http://cunit.sourceforge.net/ 00:03:58.087 00:03:58.087 00:03:58.087 Suite: memory 00:03:58.087 Test: alloc and free memory map ...[2024-07-16 00:40:32.567304] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:58.087 passed 00:03:58.087 Test: mem map translation ...[2024-07-16 00:40:32.588248] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:58.087 [2024-07-16 00:40:32.588270] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:58.087 [2024-07-16 00:40:32.588328] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:58.087 [2024-07-16 00:40:32.588340] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:58.087 passed 00:03:58.088 Test: mem map registration ...[2024-07-16 00:40:32.631676] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:58.088 [2024-07-16 00:40:32.631696] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:58.088 passed 00:03:58.088 Test: mem map adjacent registrations ...passed 00:03:58.088 00:03:58.088 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.088 suites 1 1 n/a 0 0 00:03:58.088 tests 4 4 4 0 0 00:03:58.088 asserts 152 152 152 0 n/a 00:03:58.088 00:03:58.088 Elapsed time = 0.148 seconds 00:03:58.088 00:03:58.088 real 0m0.155s 00:03:58.088 user 0m0.146s 00:03:58.088 sys 0m0.009s 00:03:58.088 00:40:32 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.088 00:40:32 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:58.088 ************************************ 00:03:58.088 END TEST env_memory 00:03:58.088 ************************************ 00:03:58.088 00:40:32 env -- common/autotest_common.sh@1142 -- # return 0 00:03:58.088 00:40:32 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:58.088 00:40:32 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.088 00:40:32 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.088 00:40:32 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.088 ************************************ 00:03:58.088 START TEST env_vtophys 00:03:58.088 ************************************ 00:03:58.088 00:40:32 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:58.088 EAL: lib.eal log level changed from notice to debug 00:03:58.088 EAL: Detected lcore 0 as core 0 on socket 0 00:03:58.088 EAL: Detected lcore 1 as core 1 on socket 0 00:03:58.088 EAL: Detected lcore 2 as core 2 on socket 0 00:03:58.088 EAL: Detected lcore 3 as core 3 on socket 0 00:03:58.088 EAL: Detected lcore 4 as core 4 on socket 0 00:03:58.088 EAL: Detected lcore 5 as core 5 on socket 0 00:03:58.088 EAL: Detected lcore 6 as core 8 on socket 0 00:03:58.088 EAL: Detected lcore 7 as core 9 on socket 0 00:03:58.088 EAL: Detected lcore 8 as core 10 on socket 0 00:03:58.088 EAL: Detected lcore 9 as core 11 on socket 0 00:03:58.088 EAL: Detected lcore 10 as core 12 on socket 0 00:03:58.088 EAL: Detected lcore 11 as core 13 on socket 0 00:03:58.088 EAL: Detected lcore 12 as core 0 on socket 1 00:03:58.088 EAL: Detected lcore 13 as core 1 on socket 1 00:03:58.088 EAL: Detected lcore 14 as core 2 on socket 1 00:03:58.088 EAL: Detected lcore 15 as core 3 on socket 1 00:03:58.088 EAL: Detected lcore 16 as core 4 on socket 1 00:03:58.088 EAL: Detected lcore 17 as core 5 on socket 1 00:03:58.088 EAL: Detected lcore 18 as core 8 on socket 1 00:03:58.088 EAL: Detected lcore 19 as core 9 on socket 1 00:03:58.088 EAL: Detected lcore 20 as core 10 on socket 1 00:03:58.088 EAL: Detected lcore 21 as core 11 on socket 1 00:03:58.088 EAL: Detected lcore 22 as core 12 on socket 1 00:03:58.088 EAL: Detected lcore 23 as core 13 on socket 1 00:03:58.088 EAL: Detected lcore 24 as core 0 on socket 0 00:03:58.088 EAL: Detected lcore 25 as core 1 on socket 0 00:03:58.088 EAL: Detected lcore 26 as core 2 on socket 0 00:03:58.088 EAL: Detected lcore 27 as core 3 on socket 0 00:03:58.088 EAL: Detected lcore 28 as core 4 on socket 0 00:03:58.088 EAL: Detected lcore 29 as core 5 on socket 0 00:03:58.088 EAL: Detected lcore 30 as core 8 on socket 0 00:03:58.088 EAL: Detected lcore 31 as core 9 on socket 0 00:03:58.088 EAL: Detected lcore 32 as core 10 on socket 0 00:03:58.088 EAL: Detected lcore 33 as core 11 on socket 0 00:03:58.088 EAL: Detected lcore 34 as core 12 on socket 0 00:03:58.088 EAL: Detected lcore 35 as core 13 on socket 0 00:03:58.088 EAL: Detected lcore 36 as core 0 on socket 1 00:03:58.088 EAL: Detected lcore 37 as core 1 on socket 1 00:03:58.088 EAL: Detected lcore 38 as core 2 on socket 1 00:03:58.088 EAL: Detected lcore 39 as core 3 on socket 1 00:03:58.088 EAL: Detected lcore 40 as core 4 on socket 1 00:03:58.088 EAL: Detected lcore 41 as core 5 on socket 1 00:03:58.088 EAL: Detected 
lcore 42 as core 8 on socket 1 00:03:58.088 EAL: Detected lcore 43 as core 9 on socket 1 00:03:58.088 EAL: Detected lcore 44 as core 10 on socket 1 00:03:58.088 EAL: Detected lcore 45 as core 11 on socket 1 00:03:58.088 EAL: Detected lcore 46 as core 12 on socket 1 00:03:58.088 EAL: Detected lcore 47 as core 13 on socket 1 00:03:58.088 EAL: Maximum logical cores by configuration: 128 00:03:58.088 EAL: Detected CPU lcores: 48 00:03:58.088 EAL: Detected NUMA nodes: 2 00:03:58.088 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:58.088 EAL: Detected shared linkage of DPDK 00:03:58.088 EAL: No shared files mode enabled, IPC will be disabled 00:03:58.088 EAL: Bus pci wants IOVA as 'DC' 00:03:58.088 EAL: Buses did not request a specific IOVA mode. 00:03:58.088 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:58.088 EAL: Selected IOVA mode 'VA' 00:03:58.088 EAL: No free 2048 kB hugepages reported on node 1 00:03:58.088 EAL: Probing VFIO support... 00:03:58.088 EAL: IOMMU type 1 (Type 1) is supported 00:03:58.088 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:58.088 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:58.088 EAL: VFIO support initialized 00:03:58.088 EAL: Ask a virtual area of 0x2e000 bytes 00:03:58.088 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:58.088 EAL: Setting up physically contiguous memory... 00:03:58.088 EAL: Setting maximum number of open files to 524288 00:03:58.088 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:58.088 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:58.088 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:58.088 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.088 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:58.088 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:58.088 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.088 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:58.088 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:58.088 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.088 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:58.088 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:58.088 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.088 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:58.088 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:58.088 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.088 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:58.088 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:58.088 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.088 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:58.088 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:58.088 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.088 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:58.088 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:58.088 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.088 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:58.088 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:58.088 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:58.088 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.088 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:58.088 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:03:58.088 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.088 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:58.088 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:58.088 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.088 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:58.088 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:58.088 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.088 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:58.088 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:58.088 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.088 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:58.088 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:58.088 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.088 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:58.088 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:58.088 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.088 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:58.088 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:58.088 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.088 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:58.088 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:58.088 EAL: Hugepages will be freed exactly as allocated. 00:03:58.088 EAL: No shared files mode enabled, IPC is disabled 00:03:58.088 EAL: No shared files mode enabled, IPC is disabled 00:03:58.088 EAL: TSC frequency is ~2700000 KHz 00:03:58.088 EAL: Main lcore 0 is ready (tid=7f28697e2a00;cpuset=[0]) 00:03:58.088 EAL: Trying to obtain current memory policy. 00:03:58.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.088 EAL: Restoring previous memory policy: 0 00:03:58.088 EAL: request: mp_malloc_sync 00:03:58.088 EAL: No shared files mode enabled, IPC is disabled 00:03:58.088 EAL: Heap on socket 0 was expanded by 2MB 00:03:58.088 EAL: No shared files mode enabled, IPC is disabled 00:03:58.088 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:58.088 EAL: Mem event callback 'spdk:(nil)' registered 00:03:58.088 00:03:58.088 00:03:58.088 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.088 http://cunit.sourceforge.net/ 00:03:58.088 00:03:58.088 00:03:58.088 Suite: components_suite 00:03:58.088 Test: vtophys_malloc_test ...passed 00:03:58.088 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:58.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.088 EAL: Restoring previous memory policy: 4 00:03:58.088 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.088 EAL: request: mp_malloc_sync 00:03:58.088 EAL: No shared files mode enabled, IPC is disabled 00:03:58.088 EAL: Heap on socket 0 was expanded by 4MB 00:03:58.088 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.088 EAL: request: mp_malloc_sync 00:03:58.088 EAL: No shared files mode enabled, IPC is disabled 00:03:58.088 EAL: Heap on socket 0 was shrunk by 4MB 00:03:58.088 EAL: Trying to obtain current memory policy. 
00:03:58.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.088 EAL: Restoring previous memory policy: 4 00:03:58.088 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.088 EAL: request: mp_malloc_sync 00:03:58.088 EAL: No shared files mode enabled, IPC is disabled 00:03:58.088 EAL: Heap on socket 0 was expanded by 6MB 00:03:58.088 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.088 EAL: request: mp_malloc_sync 00:03:58.088 EAL: No shared files mode enabled, IPC is disabled 00:03:58.088 EAL: Heap on socket 0 was shrunk by 6MB 00:03:58.088 EAL: Trying to obtain current memory policy. 00:03:58.089 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.089 EAL: Restoring previous memory policy: 4 00:03:58.089 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.089 EAL: request: mp_malloc_sync 00:03:58.089 EAL: No shared files mode enabled, IPC is disabled 00:03:58.089 EAL: Heap on socket 0 was expanded by 10MB 00:03:58.089 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.089 EAL: request: mp_malloc_sync 00:03:58.089 EAL: No shared files mode enabled, IPC is disabled 00:03:58.089 EAL: Heap on socket 0 was shrunk by 10MB 00:03:58.089 EAL: Trying to obtain current memory policy. 00:03:58.089 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.089 EAL: Restoring previous memory policy: 4 00:03:58.089 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.089 EAL: request: mp_malloc_sync 00:03:58.089 EAL: No shared files mode enabled, IPC is disabled 00:03:58.089 EAL: Heap on socket 0 was expanded by 18MB 00:03:58.089 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.089 EAL: request: mp_malloc_sync 00:03:58.089 EAL: No shared files mode enabled, IPC is disabled 00:03:58.089 EAL: Heap on socket 0 was shrunk by 18MB 00:03:58.089 EAL: Trying to obtain current memory policy. 00:03:58.089 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.089 EAL: Restoring previous memory policy: 4 00:03:58.089 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.089 EAL: request: mp_malloc_sync 00:03:58.089 EAL: No shared files mode enabled, IPC is disabled 00:03:58.089 EAL: Heap on socket 0 was expanded by 34MB 00:03:58.089 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.347 EAL: request: mp_malloc_sync 00:03:58.347 EAL: No shared files mode enabled, IPC is disabled 00:03:58.347 EAL: Heap on socket 0 was shrunk by 34MB 00:03:58.347 EAL: Trying to obtain current memory policy. 00:03:58.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.347 EAL: Restoring previous memory policy: 4 00:03:58.347 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.347 EAL: request: mp_malloc_sync 00:03:58.347 EAL: No shared files mode enabled, IPC is disabled 00:03:58.347 EAL: Heap on socket 0 was expanded by 66MB 00:03:58.347 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.347 EAL: request: mp_malloc_sync 00:03:58.347 EAL: No shared files mode enabled, IPC is disabled 00:03:58.347 EAL: Heap on socket 0 was shrunk by 66MB 00:03:58.347 EAL: Trying to obtain current memory policy. 
00:03:58.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.347 EAL: Restoring previous memory policy: 4 00:03:58.347 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.347 EAL: request: mp_malloc_sync 00:03:58.347 EAL: No shared files mode enabled, IPC is disabled 00:03:58.347 EAL: Heap on socket 0 was expanded by 130MB 00:03:58.347 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.347 EAL: request: mp_malloc_sync 00:03:58.347 EAL: No shared files mode enabled, IPC is disabled 00:03:58.347 EAL: Heap on socket 0 was shrunk by 130MB 00:03:58.347 EAL: Trying to obtain current memory policy. 00:03:58.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.347 EAL: Restoring previous memory policy: 4 00:03:58.347 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.347 EAL: request: mp_malloc_sync 00:03:58.347 EAL: No shared files mode enabled, IPC is disabled 00:03:58.347 EAL: Heap on socket 0 was expanded by 258MB 00:03:58.347 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.606 EAL: request: mp_malloc_sync 00:03:58.606 EAL: No shared files mode enabled, IPC is disabled 00:03:58.606 EAL: Heap on socket 0 was shrunk by 258MB 00:03:58.606 EAL: Trying to obtain current memory policy. 00:03:58.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.606 EAL: Restoring previous memory policy: 4 00:03:58.606 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.606 EAL: request: mp_malloc_sync 00:03:58.606 EAL: No shared files mode enabled, IPC is disabled 00:03:58.606 EAL: Heap on socket 0 was expanded by 514MB 00:03:58.863 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.863 EAL: request: mp_malloc_sync 00:03:58.863 EAL: No shared files mode enabled, IPC is disabled 00:03:58.863 EAL: Heap on socket 0 was shrunk by 514MB 00:03:58.863 EAL: Trying to obtain current memory policy. 
00:03:58.863 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.120 EAL: Restoring previous memory policy: 4 00:03:59.120 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.120 EAL: request: mp_malloc_sync 00:03:59.120 EAL: No shared files mode enabled, IPC is disabled 00:03:59.120 EAL: Heap on socket 0 was expanded by 1026MB 00:03:59.377 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.635 EAL: request: mp_malloc_sync 00:03:59.635 EAL: No shared files mode enabled, IPC is disabled 00:03:59.635 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:59.635 passed 00:03:59.635 00:03:59.635 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.635 suites 1 1 n/a 0 0 00:03:59.635 tests 2 2 2 0 0 00:03:59.635 asserts 497 497 497 0 n/a 00:03:59.635 00:03:59.635 Elapsed time = 1.384 seconds 00:03:59.635 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.635 EAL: request: mp_malloc_sync 00:03:59.635 EAL: No shared files mode enabled, IPC is disabled 00:03:59.635 EAL: Heap on socket 0 was shrunk by 2MB 00:03:59.635 EAL: No shared files mode enabled, IPC is disabled 00:03:59.635 EAL: No shared files mode enabled, IPC is disabled 00:03:59.635 EAL: No shared files mode enabled, IPC is disabled 00:03:59.635 00:03:59.635 real 0m1.500s 00:03:59.635 user 0m0.871s 00:03:59.635 sys 0m0.600s 00:03:59.635 00:40:34 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.635 00:40:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:59.635 ************************************ 00:03:59.635 END TEST env_vtophys 00:03:59.635 ************************************ 00:03:59.635 00:40:34 env -- common/autotest_common.sh@1142 -- # return 0 00:03:59.635 00:40:34 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:59.635 00:40:34 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.635 00:40:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.635 00:40:34 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.635 ************************************ 00:03:59.635 START TEST env_pci 00:03:59.635 ************************************ 00:03:59.635 00:40:34 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:59.635 00:03:59.635 00:03:59.635 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.635 http://cunit.sourceforge.net/ 00:03:59.635 00:03:59.635 00:03:59.635 Suite: pci 00:03:59.635 Test: pci_hook ...[2024-07-16 00:40:34.303484] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2516078 has claimed it 00:03:59.635 EAL: Cannot find device (10000:00:01.0) 00:03:59.635 EAL: Failed to attach device on primary process 00:03:59.635 passed 00:03:59.635 00:03:59.635 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.635 suites 1 1 n/a 0 0 00:03:59.635 tests 1 1 1 0 0 00:03:59.635 asserts 25 25 25 0 n/a 00:03:59.635 00:03:59.635 Elapsed time = 0.022 seconds 00:03:59.635 00:03:59.635 real 0m0.035s 00:03:59.635 user 0m0.011s 00:03:59.635 sys 0m0.024s 00:03:59.635 00:40:34 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.635 00:40:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:59.635 ************************************ 00:03:59.635 END TEST env_pci 00:03:59.635 ************************************ 
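(Editor's note, not part of the captured output) The three env unit-test binaries exercised above can also be run by hand outside the autotest harness. The paths below are the same ones the run_test calls in this log use; running them as root is an assumption based on the hugepage/VFIO setup this job performs, not something the log states.

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # repo root used throughout this job
sudo "$rootdir/test/env/memory/memory_ut"     # mem map alloc/translate/register suite shown above
sudo "$rootdir/test/env/vtophys/vtophys"      # EAL heap expand/shrink (malloc) suite shown above
sudo "$rootdir/test/env/pci/pci_ut"           # pci_hook device-claim test shown above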
00:03:59.635 00:40:34 env -- common/autotest_common.sh@1142 -- # return 0 00:03:59.635 00:40:34 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:59.635 00:40:34 env -- env/env.sh@15 -- # uname 00:03:59.635 00:40:34 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:59.635 00:40:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:59.635 00:40:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:59.635 00:40:34 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:59.635 00:40:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.635 00:40:34 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.635 ************************************ 00:03:59.635 START TEST env_dpdk_post_init 00:03:59.635 ************************************ 00:03:59.635 00:40:34 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:59.894 EAL: Detected CPU lcores: 48 00:03:59.894 EAL: Detected NUMA nodes: 2 00:03:59.894 EAL: Detected shared linkage of DPDK 00:03:59.894 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:59.894 EAL: Selected IOVA mode 'VA' 00:03:59.894 EAL: No free 2048 kB hugepages reported on node 1 00:03:59.894 EAL: VFIO support initialized 00:03:59.894 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:59.894 EAL: Using IOMMU type 1 (Type 1) 00:03:59.894 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:59.894 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:59.894 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:59.894 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:59.894 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:59.894 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:59.894 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:59.894 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:59.894 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:59.894 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:59.894 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:59.894 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:59.894 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:00.151 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:00.151 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:00.151 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:00.717 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:03.995 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:03.995 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:04.253 Starting DPDK initialization... 00:04:04.253 Starting SPDK post initialization... 00:04:04.253 SPDK NVMe probe 00:04:04.253 Attaching to 0000:88:00.0 00:04:04.253 Attached to 0000:88:00.0 00:04:04.253 Cleaning up... 
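(Editor's note, not from the log) The controller the post-init test just attached, 0000:88:00.0, is the same one the harness enumerates elsewhere in this run with gen_nvme.sh piped through jq; a minimal sketch of that lookup, reusing the exact pipeline recorded in this trace:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# gen_nvme.sh emits a bdev_nvme attach config as JSON; jq pulls out each
# controller's PCI address (traddr). On this node it yields 0000:88:00.0.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
printf '%s\n' "${bdfs[@]}"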
00:04:04.253 00:04:04.253 real 0m4.387s 00:04:04.253 user 0m3.262s 00:04:04.253 sys 0m0.183s 00:04:04.253 00:40:38 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.253 00:40:38 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:04.253 ************************************ 00:04:04.253 END TEST env_dpdk_post_init 00:04:04.253 ************************************ 00:04:04.253 00:40:38 env -- common/autotest_common.sh@1142 -- # return 0 00:04:04.253 00:40:38 env -- env/env.sh@26 -- # uname 00:04:04.253 00:40:38 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:04.253 00:40:38 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:04.253 00:40:38 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.253 00:40:38 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.253 00:40:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.253 ************************************ 00:04:04.253 START TEST env_mem_callbacks 00:04:04.253 ************************************ 00:04:04.253 00:40:38 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:04.253 EAL: Detected CPU lcores: 48 00:04:04.253 EAL: Detected NUMA nodes: 2 00:04:04.253 EAL: Detected shared linkage of DPDK 00:04:04.253 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:04.253 EAL: Selected IOVA mode 'VA' 00:04:04.253 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.253 EAL: VFIO support initialized 00:04:04.253 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:04.253 00:04:04.253 00:04:04.253 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.253 http://cunit.sourceforge.net/ 00:04:04.253 00:04:04.253 00:04:04.253 Suite: memory 00:04:04.253 Test: test ... 
00:04:04.253 register 0x200000200000 2097152 00:04:04.253 malloc 3145728 00:04:04.253 register 0x200000400000 4194304 00:04:04.253 buf 0x200000500000 len 3145728 PASSED 00:04:04.253 malloc 64 00:04:04.253 buf 0x2000004fff40 len 64 PASSED 00:04:04.253 malloc 4194304 00:04:04.253 register 0x200000800000 6291456 00:04:04.253 buf 0x200000a00000 len 4194304 PASSED 00:04:04.253 free 0x200000500000 3145728 00:04:04.253 free 0x2000004fff40 64 00:04:04.253 unregister 0x200000400000 4194304 PASSED 00:04:04.253 free 0x200000a00000 4194304 00:04:04.253 unregister 0x200000800000 6291456 PASSED 00:04:04.253 malloc 8388608 00:04:04.253 register 0x200000400000 10485760 00:04:04.253 buf 0x200000600000 len 8388608 PASSED 00:04:04.253 free 0x200000600000 8388608 00:04:04.253 unregister 0x200000400000 10485760 PASSED 00:04:04.253 passed 00:04:04.253 00:04:04.253 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.253 suites 1 1 n/a 0 0 00:04:04.253 tests 1 1 1 0 0 00:04:04.253 asserts 15 15 15 0 n/a 00:04:04.253 00:04:04.253 Elapsed time = 0.005 seconds 00:04:04.253 00:04:04.253 real 0m0.049s 00:04:04.253 user 0m0.016s 00:04:04.253 sys 0m0.032s 00:04:04.253 00:40:38 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.253 00:40:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:04.253 ************************************ 00:04:04.253 END TEST env_mem_callbacks 00:04:04.253 ************************************ 00:04:04.253 00:40:38 env -- common/autotest_common.sh@1142 -- # return 0 00:04:04.253 00:04:04.253 real 0m6.424s 00:04:04.253 user 0m4.425s 00:04:04.253 sys 0m1.046s 00:04:04.253 00:40:38 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.253 00:40:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.253 ************************************ 00:04:04.253 END TEST env 00:04:04.253 ************************************ 00:04:04.253 00:40:38 -- common/autotest_common.sh@1142 -- # return 0 00:04:04.253 00:40:38 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:04.253 00:40:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.253 00:40:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.253 00:40:38 -- common/autotest_common.sh@10 -- # set +x 00:04:04.253 ************************************ 00:04:04.253 START TEST rpc 00:04:04.253 ************************************ 00:04:04.253 00:40:38 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:04.253 * Looking for test storage... 00:04:04.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:04.253 00:40:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2516730 00:04:04.253 00:40:38 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:04.253 00:40:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:04.253 00:40:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2516730 00:04:04.253 00:40:38 rpc -- common/autotest_common.sh@829 -- # '[' -z 2516730 ']' 00:04:04.253 00:40:38 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.254 00:40:38 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:04.254 00:40:38 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:04.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.254 00:40:38 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:04.254 00:40:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.511 [2024-07-16 00:40:39.031592] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:04:04.511 [2024-07-16 00:40:39.031696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2516730 ] 00:04:04.511 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.511 [2024-07-16 00:40:39.089325] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.511 [2024-07-16 00:40:39.194738] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:04.511 [2024-07-16 00:40:39.194795] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2516730' to capture a snapshot of events at runtime. 00:04:04.511 [2024-07-16 00:40:39.194823] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:04.511 [2024-07-16 00:40:39.194834] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:04.511 [2024-07-16 00:40:39.194844] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2516730 for offline analysis/debug. 00:04:04.511 [2024-07-16 00:40:39.194870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.769 00:40:39 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:04.769 00:40:39 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:04.769 00:40:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:04.769 00:40:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:04.769 00:40:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:04.769 00:40:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:04.769 00:40:39 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.769 00:40:39 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.769 00:40:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.769 ************************************ 00:04:04.769 START TEST rpc_integrity 00:04:04.769 ************************************ 00:04:04.769 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:04.769 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:04.769 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:04.769 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.769 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:04.769 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:04.769 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:05.027 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:05.027 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.027 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:05.027 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.027 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:05.027 { 00:04:05.027 "name": "Malloc0", 00:04:05.027 "aliases": [ 00:04:05.027 "5cfbf9d3-4bb4-4556-a49a-47e7bafa8d33" 00:04:05.027 ], 00:04:05.027 "product_name": "Malloc disk", 00:04:05.027 "block_size": 512, 00:04:05.027 "num_blocks": 16384, 00:04:05.027 "uuid": "5cfbf9d3-4bb4-4556-a49a-47e7bafa8d33", 00:04:05.027 "assigned_rate_limits": { 00:04:05.027 "rw_ios_per_sec": 0, 00:04:05.027 "rw_mbytes_per_sec": 0, 00:04:05.027 "r_mbytes_per_sec": 0, 00:04:05.027 "w_mbytes_per_sec": 0 00:04:05.027 }, 00:04:05.027 "claimed": false, 00:04:05.027 "zoned": false, 00:04:05.027 "supported_io_types": { 00:04:05.027 "read": true, 00:04:05.027 "write": true, 00:04:05.027 "unmap": true, 00:04:05.027 "flush": true, 00:04:05.027 "reset": true, 00:04:05.027 "nvme_admin": false, 00:04:05.027 "nvme_io": false, 00:04:05.027 "nvme_io_md": false, 00:04:05.027 "write_zeroes": true, 00:04:05.027 "zcopy": true, 00:04:05.027 "get_zone_info": false, 00:04:05.027 "zone_management": false, 00:04:05.027 "zone_append": false, 00:04:05.027 "compare": false, 00:04:05.027 "compare_and_write": false, 00:04:05.027 "abort": true, 00:04:05.027 "seek_hole": false, 00:04:05.027 "seek_data": false, 00:04:05.027 "copy": true, 00:04:05.027 "nvme_iov_md": false 00:04:05.027 }, 00:04:05.027 "memory_domains": [ 00:04:05.027 { 00:04:05.027 "dma_device_id": "system", 00:04:05.027 "dma_device_type": 1 00:04:05.027 }, 00:04:05.027 { 00:04:05.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.027 "dma_device_type": 2 00:04:05.027 } 00:04:05.027 ], 00:04:05.027 "driver_specific": {} 00:04:05.027 } 00:04:05.027 ]' 00:04:05.027 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:05.027 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:05.027 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.027 [2024-07-16 00:40:39.589622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:05.027 [2024-07-16 00:40:39.589665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:05.027 [2024-07-16 00:40:39.589689] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24f2eb0 00:04:05.027 [2024-07-16 00:40:39.589711] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:05.027 
[2024-07-16 00:40:39.591340] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:05.027 [2024-07-16 00:40:39.591369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:05.027 Passthru0 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.027 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.027 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:05.027 { 00:04:05.027 "name": "Malloc0", 00:04:05.027 "aliases": [ 00:04:05.027 "5cfbf9d3-4bb4-4556-a49a-47e7bafa8d33" 00:04:05.027 ], 00:04:05.027 "product_name": "Malloc disk", 00:04:05.027 "block_size": 512, 00:04:05.027 "num_blocks": 16384, 00:04:05.027 "uuid": "5cfbf9d3-4bb4-4556-a49a-47e7bafa8d33", 00:04:05.027 "assigned_rate_limits": { 00:04:05.027 "rw_ios_per_sec": 0, 00:04:05.027 "rw_mbytes_per_sec": 0, 00:04:05.027 "r_mbytes_per_sec": 0, 00:04:05.027 "w_mbytes_per_sec": 0 00:04:05.027 }, 00:04:05.027 "claimed": true, 00:04:05.027 "claim_type": "exclusive_write", 00:04:05.027 "zoned": false, 00:04:05.027 "supported_io_types": { 00:04:05.027 "read": true, 00:04:05.027 "write": true, 00:04:05.027 "unmap": true, 00:04:05.027 "flush": true, 00:04:05.027 "reset": true, 00:04:05.027 "nvme_admin": false, 00:04:05.027 "nvme_io": false, 00:04:05.027 "nvme_io_md": false, 00:04:05.027 "write_zeroes": true, 00:04:05.027 "zcopy": true, 00:04:05.027 "get_zone_info": false, 00:04:05.027 "zone_management": false, 00:04:05.027 "zone_append": false, 00:04:05.027 "compare": false, 00:04:05.027 "compare_and_write": false, 00:04:05.027 "abort": true, 00:04:05.027 "seek_hole": false, 00:04:05.027 "seek_data": false, 00:04:05.027 "copy": true, 00:04:05.027 "nvme_iov_md": false 00:04:05.027 }, 00:04:05.027 "memory_domains": [ 00:04:05.027 { 00:04:05.027 "dma_device_id": "system", 00:04:05.027 "dma_device_type": 1 00:04:05.027 }, 00:04:05.027 { 00:04:05.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.027 "dma_device_type": 2 00:04:05.027 } 00:04:05.027 ], 00:04:05.027 "driver_specific": {} 00:04:05.027 }, 00:04:05.027 { 00:04:05.027 "name": "Passthru0", 00:04:05.027 "aliases": [ 00:04:05.027 "954725b4-5006-5a89-b57a-907239d041ab" 00:04:05.027 ], 00:04:05.027 "product_name": "passthru", 00:04:05.027 "block_size": 512, 00:04:05.027 "num_blocks": 16384, 00:04:05.027 "uuid": "954725b4-5006-5a89-b57a-907239d041ab", 00:04:05.027 "assigned_rate_limits": { 00:04:05.027 "rw_ios_per_sec": 0, 00:04:05.027 "rw_mbytes_per_sec": 0, 00:04:05.027 "r_mbytes_per_sec": 0, 00:04:05.027 "w_mbytes_per_sec": 0 00:04:05.027 }, 00:04:05.027 "claimed": false, 00:04:05.027 "zoned": false, 00:04:05.027 "supported_io_types": { 00:04:05.027 "read": true, 00:04:05.027 "write": true, 00:04:05.027 "unmap": true, 00:04:05.027 "flush": true, 00:04:05.027 "reset": true, 00:04:05.027 "nvme_admin": false, 00:04:05.027 "nvme_io": false, 00:04:05.027 "nvme_io_md": false, 00:04:05.027 "write_zeroes": true, 00:04:05.027 "zcopy": true, 00:04:05.027 "get_zone_info": false, 00:04:05.027 "zone_management": false, 00:04:05.027 "zone_append": false, 00:04:05.027 "compare": false, 00:04:05.027 "compare_and_write": false, 00:04:05.027 "abort": true, 00:04:05.027 "seek_hole": false, 
00:04:05.027 "seek_data": false, 00:04:05.027 "copy": true, 00:04:05.027 "nvme_iov_md": false 00:04:05.027 }, 00:04:05.027 "memory_domains": [ 00:04:05.027 { 00:04:05.027 "dma_device_id": "system", 00:04:05.027 "dma_device_type": 1 00:04:05.027 }, 00:04:05.027 { 00:04:05.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.027 "dma_device_type": 2 00:04:05.027 } 00:04:05.027 ], 00:04:05.027 "driver_specific": { 00:04:05.027 "passthru": { 00:04:05.027 "name": "Passthru0", 00:04:05.027 "base_bdev_name": "Malloc0" 00:04:05.027 } 00:04:05.027 } 00:04:05.027 } 00:04:05.027 ]' 00:04:05.027 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:05.027 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:05.027 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.027 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.027 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.027 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:05.027 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:05.027 00:40:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:05.027 00:04:05.027 real 0m0.227s 00:04:05.027 user 0m0.140s 00:04:05.027 sys 0m0.030s 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.027 00:40:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.027 ************************************ 00:04:05.027 END TEST rpc_integrity 00:04:05.027 ************************************ 00:04:05.027 00:40:39 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:05.028 00:40:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:05.028 00:40:39 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.028 00:40:39 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.028 00:40:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.028 ************************************ 00:04:05.028 START TEST rpc_plugins 00:04:05.028 ************************************ 00:04:05.028 00:40:39 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:05.028 00:40:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:05.028 00:40:39 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.028 00:40:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.028 00:40:39 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.028 00:40:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:05.028 00:40:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:05.028 00:40:39 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.028 00:40:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.028 00:40:39 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.028 00:40:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:05.028 { 00:04:05.028 "name": "Malloc1", 00:04:05.028 "aliases": [ 00:04:05.028 "749850c0-4266-4915-8596-e18ac97dcb1b" 00:04:05.028 ], 00:04:05.028 "product_name": "Malloc disk", 00:04:05.028 "block_size": 4096, 00:04:05.028 "num_blocks": 256, 00:04:05.028 "uuid": "749850c0-4266-4915-8596-e18ac97dcb1b", 00:04:05.028 "assigned_rate_limits": { 00:04:05.028 "rw_ios_per_sec": 0, 00:04:05.028 "rw_mbytes_per_sec": 0, 00:04:05.028 "r_mbytes_per_sec": 0, 00:04:05.028 "w_mbytes_per_sec": 0 00:04:05.028 }, 00:04:05.028 "claimed": false, 00:04:05.028 "zoned": false, 00:04:05.028 "supported_io_types": { 00:04:05.028 "read": true, 00:04:05.028 "write": true, 00:04:05.028 "unmap": true, 00:04:05.028 "flush": true, 00:04:05.028 "reset": true, 00:04:05.028 "nvme_admin": false, 00:04:05.028 "nvme_io": false, 00:04:05.028 "nvme_io_md": false, 00:04:05.028 "write_zeroes": true, 00:04:05.028 "zcopy": true, 00:04:05.028 "get_zone_info": false, 00:04:05.028 "zone_management": false, 00:04:05.028 "zone_append": false, 00:04:05.028 "compare": false, 00:04:05.028 "compare_and_write": false, 00:04:05.028 "abort": true, 00:04:05.028 "seek_hole": false, 00:04:05.028 "seek_data": false, 00:04:05.028 "copy": true, 00:04:05.028 "nvme_iov_md": false 00:04:05.028 }, 00:04:05.028 "memory_domains": [ 00:04:05.028 { 00:04:05.028 "dma_device_id": "system", 00:04:05.028 "dma_device_type": 1 00:04:05.028 }, 00:04:05.028 { 00:04:05.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.028 "dma_device_type": 2 00:04:05.028 } 00:04:05.028 ], 00:04:05.028 "driver_specific": {} 00:04:05.028 } 00:04:05.028 ]' 00:04:05.028 00:40:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:05.286 00:40:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:05.286 00:40:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:05.286 00:40:39 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.286 00:40:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.286 00:40:39 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.286 00:40:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:05.286 00:40:39 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.286 00:40:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.286 00:40:39 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.286 00:40:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:05.286 00:40:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:05.286 00:40:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:05.286 00:04:05.286 real 0m0.117s 00:04:05.286 user 0m0.077s 00:04:05.286 sys 0m0.010s 00:04:05.286 00:40:39 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.286 00:40:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.286 ************************************ 00:04:05.286 END TEST rpc_plugins 00:04:05.286 ************************************ 00:04:05.286 00:40:39 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:05.286 00:40:39 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:05.286 00:40:39 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.286 00:40:39 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.286 00:40:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.286 ************************************ 00:04:05.286 START TEST rpc_trace_cmd_test 00:04:05.286 ************************************ 00:04:05.286 00:40:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:05.286 00:40:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:05.286 00:40:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:05.286 00:40:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.286 00:40:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:05.286 00:40:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.286 00:40:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:05.286 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2516730", 00:04:05.286 "tpoint_group_mask": "0x8", 00:04:05.286 "iscsi_conn": { 00:04:05.286 "mask": "0x2", 00:04:05.286 "tpoint_mask": "0x0" 00:04:05.286 }, 00:04:05.286 "scsi": { 00:04:05.286 "mask": "0x4", 00:04:05.286 "tpoint_mask": "0x0" 00:04:05.286 }, 00:04:05.286 "bdev": { 00:04:05.286 "mask": "0x8", 00:04:05.286 "tpoint_mask": "0xffffffffffffffff" 00:04:05.286 }, 00:04:05.286 "nvmf_rdma": { 00:04:05.286 "mask": "0x10", 00:04:05.286 "tpoint_mask": "0x0" 00:04:05.286 }, 00:04:05.286 "nvmf_tcp": { 00:04:05.286 "mask": "0x20", 00:04:05.286 "tpoint_mask": "0x0" 00:04:05.286 }, 00:04:05.286 "ftl": { 00:04:05.286 "mask": "0x40", 00:04:05.286 "tpoint_mask": "0x0" 00:04:05.286 }, 00:04:05.286 "blobfs": { 00:04:05.286 "mask": "0x80", 00:04:05.286 "tpoint_mask": "0x0" 00:04:05.286 }, 00:04:05.286 "dsa": { 00:04:05.286 "mask": "0x200", 00:04:05.286 "tpoint_mask": "0x0" 00:04:05.286 }, 00:04:05.286 "thread": { 00:04:05.286 "mask": "0x400", 00:04:05.286 "tpoint_mask": "0x0" 00:04:05.286 }, 00:04:05.286 "nvme_pcie": { 00:04:05.286 "mask": "0x800", 00:04:05.286 "tpoint_mask": "0x0" 00:04:05.286 }, 00:04:05.286 "iaa": { 00:04:05.286 "mask": "0x1000", 00:04:05.286 "tpoint_mask": "0x0" 00:04:05.286 }, 00:04:05.286 "nvme_tcp": { 00:04:05.286 "mask": "0x2000", 00:04:05.286 "tpoint_mask": "0x0" 00:04:05.286 }, 00:04:05.286 "bdev_nvme": { 00:04:05.286 "mask": "0x4000", 00:04:05.286 "tpoint_mask": "0x0" 00:04:05.286 }, 00:04:05.286 "sock": { 00:04:05.286 "mask": "0x8000", 00:04:05.286 "tpoint_mask": "0x0" 00:04:05.286 } 00:04:05.286 }' 00:04:05.286 00:40:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:05.286 00:40:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:05.286 00:40:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:05.286 00:40:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:05.286 00:40:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:05.286 00:40:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:05.286 00:40:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:05.544 00:40:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:05.544 00:40:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:05.544 00:40:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
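The trace_get_info result above shows the effect of starting the target with '-e bdev': the bdev group (mask 0x8) is fully enabled (tpoint_mask 0xffffffffffffffff) while every other group stays at 0x0, and events land in the tpoint_shm_path file. A rough sketch of capturing that trace by hand, using the command the target itself advertised earlier in this run (the pid is a placeholder for the running spdk_tgt):

    # start the target with the bdev tracepoint group enabled, as in this test
    ./build/bin/spdk_tgt -e bdev &
    # events accumulate in /dev/shm/spdk_tgt_trace.pid<pid>; snapshot them at runtime
    spdk_trace -s spdk_tgt -p <pid>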
00:04:05.544 00:04:05.544 real 0m0.195s 00:04:05.544 user 0m0.174s 00:04:05.544 sys 0m0.014s 00:04:05.544 00:40:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.544 00:40:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:05.544 ************************************ 00:04:05.544 END TEST rpc_trace_cmd_test 00:04:05.544 ************************************ 00:04:05.544 00:40:40 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:05.544 00:40:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:05.544 00:40:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:05.544 00:40:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:05.544 00:40:40 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.544 00:40:40 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.544 00:40:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.544 ************************************ 00:04:05.544 START TEST rpc_daemon_integrity 00:04:05.544 ************************************ 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:05.544 { 00:04:05.544 "name": "Malloc2", 00:04:05.544 "aliases": [ 00:04:05.544 "5bc449b6-dcbb-451b-9f40-0dbf2feff3dc" 00:04:05.544 ], 00:04:05.544 "product_name": "Malloc disk", 00:04:05.544 "block_size": 512, 00:04:05.544 "num_blocks": 16384, 00:04:05.544 "uuid": "5bc449b6-dcbb-451b-9f40-0dbf2feff3dc", 00:04:05.544 "assigned_rate_limits": { 00:04:05.544 "rw_ios_per_sec": 0, 00:04:05.544 "rw_mbytes_per_sec": 0, 00:04:05.544 "r_mbytes_per_sec": 0, 00:04:05.544 "w_mbytes_per_sec": 0 00:04:05.544 }, 00:04:05.544 "claimed": false, 00:04:05.544 "zoned": false, 00:04:05.544 "supported_io_types": { 00:04:05.544 "read": true, 00:04:05.544 "write": true, 00:04:05.544 "unmap": true, 00:04:05.544 "flush": true, 00:04:05.544 "reset": true, 00:04:05.544 "nvme_admin": false, 00:04:05.544 "nvme_io": false, 
00:04:05.544 "nvme_io_md": false, 00:04:05.544 "write_zeroes": true, 00:04:05.544 "zcopy": true, 00:04:05.544 "get_zone_info": false, 00:04:05.544 "zone_management": false, 00:04:05.544 "zone_append": false, 00:04:05.544 "compare": false, 00:04:05.544 "compare_and_write": false, 00:04:05.544 "abort": true, 00:04:05.544 "seek_hole": false, 00:04:05.544 "seek_data": false, 00:04:05.544 "copy": true, 00:04:05.544 "nvme_iov_md": false 00:04:05.544 }, 00:04:05.544 "memory_domains": [ 00:04:05.544 { 00:04:05.544 "dma_device_id": "system", 00:04:05.544 "dma_device_type": 1 00:04:05.544 }, 00:04:05.544 { 00:04:05.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.544 "dma_device_type": 2 00:04:05.544 } 00:04:05.544 ], 00:04:05.544 "driver_specific": {} 00:04:05.544 } 00:04:05.544 ]' 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.544 [2024-07-16 00:40:40.268287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:05.544 [2024-07-16 00:40:40.268329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:05.544 [2024-07-16 00:40:40.268357] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24ebf40 00:04:05.544 [2024-07-16 00:40:40.268374] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:05.544 [2024-07-16 00:40:40.269799] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:05.544 [2024-07-16 00:40:40.269828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:05.544 Passthru0 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.544 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:05.544 { 00:04:05.544 "name": "Malloc2", 00:04:05.544 "aliases": [ 00:04:05.544 "5bc449b6-dcbb-451b-9f40-0dbf2feff3dc" 00:04:05.544 ], 00:04:05.544 "product_name": "Malloc disk", 00:04:05.544 "block_size": 512, 00:04:05.544 "num_blocks": 16384, 00:04:05.544 "uuid": "5bc449b6-dcbb-451b-9f40-0dbf2feff3dc", 00:04:05.544 "assigned_rate_limits": { 00:04:05.544 "rw_ios_per_sec": 0, 00:04:05.544 "rw_mbytes_per_sec": 0, 00:04:05.544 "r_mbytes_per_sec": 0, 00:04:05.544 "w_mbytes_per_sec": 0 00:04:05.544 }, 00:04:05.544 "claimed": true, 00:04:05.544 "claim_type": "exclusive_write", 00:04:05.544 "zoned": false, 00:04:05.544 "supported_io_types": { 00:04:05.544 "read": true, 00:04:05.544 "write": true, 00:04:05.544 "unmap": true, 00:04:05.544 "flush": true, 00:04:05.544 "reset": true, 00:04:05.544 "nvme_admin": false, 00:04:05.544 "nvme_io": false, 00:04:05.544 "nvme_io_md": false, 00:04:05.544 "write_zeroes": true, 00:04:05.544 "zcopy": true, 00:04:05.544 "get_zone_info": 
false, 00:04:05.544 "zone_management": false, 00:04:05.544 "zone_append": false, 00:04:05.544 "compare": false, 00:04:05.544 "compare_and_write": false, 00:04:05.544 "abort": true, 00:04:05.544 "seek_hole": false, 00:04:05.544 "seek_data": false, 00:04:05.544 "copy": true, 00:04:05.544 "nvme_iov_md": false 00:04:05.544 }, 00:04:05.544 "memory_domains": [ 00:04:05.544 { 00:04:05.544 "dma_device_id": "system", 00:04:05.544 "dma_device_type": 1 00:04:05.544 }, 00:04:05.544 { 00:04:05.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.544 "dma_device_type": 2 00:04:05.544 } 00:04:05.544 ], 00:04:05.544 "driver_specific": {} 00:04:05.544 }, 00:04:05.544 { 00:04:05.544 "name": "Passthru0", 00:04:05.544 "aliases": [ 00:04:05.544 "92fdff2c-c545-5c34-a1ba-ccf2ba33a7dc" 00:04:05.544 ], 00:04:05.544 "product_name": "passthru", 00:04:05.544 "block_size": 512, 00:04:05.544 "num_blocks": 16384, 00:04:05.544 "uuid": "92fdff2c-c545-5c34-a1ba-ccf2ba33a7dc", 00:04:05.544 "assigned_rate_limits": { 00:04:05.544 "rw_ios_per_sec": 0, 00:04:05.544 "rw_mbytes_per_sec": 0, 00:04:05.544 "r_mbytes_per_sec": 0, 00:04:05.544 "w_mbytes_per_sec": 0 00:04:05.544 }, 00:04:05.544 "claimed": false, 00:04:05.544 "zoned": false, 00:04:05.545 "supported_io_types": { 00:04:05.545 "read": true, 00:04:05.545 "write": true, 00:04:05.545 "unmap": true, 00:04:05.545 "flush": true, 00:04:05.545 "reset": true, 00:04:05.545 "nvme_admin": false, 00:04:05.545 "nvme_io": false, 00:04:05.545 "nvme_io_md": false, 00:04:05.545 "write_zeroes": true, 00:04:05.545 "zcopy": true, 00:04:05.545 "get_zone_info": false, 00:04:05.545 "zone_management": false, 00:04:05.545 "zone_append": false, 00:04:05.545 "compare": false, 00:04:05.545 "compare_and_write": false, 00:04:05.545 "abort": true, 00:04:05.545 "seek_hole": false, 00:04:05.545 "seek_data": false, 00:04:05.545 "copy": true, 00:04:05.545 "nvme_iov_md": false 00:04:05.545 }, 00:04:05.545 "memory_domains": [ 00:04:05.545 { 00:04:05.545 "dma_device_id": "system", 00:04:05.545 "dma_device_type": 1 00:04:05.545 }, 00:04:05.545 { 00:04:05.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.545 "dma_device_type": 2 00:04:05.545 } 00:04:05.545 ], 00:04:05.545 "driver_specific": { 00:04:05.545 "passthru": { 00:04:05.545 "name": "Passthru0", 00:04:05.545 "base_bdev_name": "Malloc2" 00:04:05.545 } 00:04:05.545 } 00:04:05.545 } 00:04:05.545 ]' 00:04:05.545 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:05.802 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:05.802 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:05.802 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.802 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.802 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.802 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:05.802 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.802 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.802 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.802 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:05.802 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.802 00:40:40 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.802 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.802 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:05.802 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:05.802 00:40:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:05.802 00:04:05.802 real 0m0.226s 00:04:05.802 user 0m0.154s 00:04:05.802 sys 0m0.020s 00:04:05.802 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.802 00:40:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.802 ************************************ 00:04:05.802 END TEST rpc_daemon_integrity 00:04:05.802 ************************************ 00:04:05.802 00:40:40 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:05.802 00:40:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:05.802 00:40:40 rpc -- rpc/rpc.sh@84 -- # killprocess 2516730 00:04:05.802 00:40:40 rpc -- common/autotest_common.sh@948 -- # '[' -z 2516730 ']' 00:04:05.802 00:40:40 rpc -- common/autotest_common.sh@952 -- # kill -0 2516730 00:04:05.802 00:40:40 rpc -- common/autotest_common.sh@953 -- # uname 00:04:05.802 00:40:40 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:05.802 00:40:40 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2516730 00:04:05.802 00:40:40 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:05.802 00:40:40 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:05.802 00:40:40 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2516730' 00:04:05.802 killing process with pid 2516730 00:04:05.802 00:40:40 rpc -- common/autotest_common.sh@967 -- # kill 2516730 00:04:05.802 00:40:40 rpc -- common/autotest_common.sh@972 -- # wait 2516730 00:04:06.366 00:04:06.366 real 0m1.946s 00:04:06.366 user 0m2.420s 00:04:06.366 sys 0m0.606s 00:04:06.366 00:40:40 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.366 00:40:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.366 ************************************ 00:04:06.366 END TEST rpc 00:04:06.366 ************************************ 00:04:06.366 00:40:40 -- common/autotest_common.sh@1142 -- # return 0 00:04:06.366 00:40:40 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:06.366 00:40:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.366 00:40:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.366 00:40:40 -- common/autotest_common.sh@10 -- # set +x 00:04:06.366 ************************************ 00:04:06.366 START TEST skip_rpc 00:04:06.366 ************************************ 00:04:06.366 00:40:40 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:06.366 * Looking for test storage... 
00:04:06.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:06.366 00:40:40 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:06.366 00:40:40 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:06.366 00:40:40 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:06.366 00:40:40 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.366 00:40:40 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.366 00:40:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.366 ************************************ 00:04:06.366 START TEST skip_rpc 00:04:06.366 ************************************ 00:04:06.366 00:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:06.366 00:40:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2517169 00:04:06.366 00:40:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:06.366 00:40:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.366 00:40:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:06.366 [2024-07-16 00:40:41.050394] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:04:06.366 [2024-07-16 00:40:41.050472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2517169 ] 00:04:06.366 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.366 [2024-07-16 00:40:41.111731] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.624 [2024-07-16 00:40:41.227222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.881 00:40:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:11.881 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:11.881 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:11.881 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:11.881 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.881 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:11.881 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.881 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:11.881 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2517169 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2517169 ']' 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2517169 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2517169 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2517169' 00:04:11.882 killing process with pid 2517169 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2517169 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2517169 00:04:11.882 00:04:11.882 real 0m5.472s 00:04:11.882 user 0m5.164s 00:04:11.882 sys 0m0.313s 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.882 00:40:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.882 ************************************ 00:04:11.882 END TEST skip_rpc 00:04:11.882 ************************************ 00:04:11.882 00:40:46 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:11.882 00:40:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:11.882 00:40:46 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.882 00:40:46 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.882 00:40:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.882 ************************************ 00:04:11.882 START TEST skip_rpc_with_json 00:04:11.882 ************************************ 00:04:11.882 00:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:11.882 00:40:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:11.882 00:40:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2517862 00:04:11.882 00:40:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:11.882 00:40:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.882 00:40:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2517862 00:04:11.882 00:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2517862 ']' 00:04:11.882 00:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.882 00:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:11.882 00:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:11.882 00:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:11.882 00:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.882 [2024-07-16 00:40:46.569796] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:04:11.882 [2024-07-16 00:40:46.569908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2517862 ] 00:04:11.882 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.882 [2024-07-16 00:40:46.627144] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.140 [2024-07-16 00:40:46.737320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.397 00:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:12.397 00:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:12.397 00:40:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:12.397 00:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.397 00:40:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.397 [2024-07-16 00:40:46.997605] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:12.397 request: 00:04:12.397 { 00:04:12.397 "trtype": "tcp", 00:04:12.397 "method": "nvmf_get_transports", 00:04:12.397 "req_id": 1 00:04:12.397 } 00:04:12.397 Got JSON-RPC error response 00:04:12.397 response: 00:04:12.397 { 00:04:12.397 "code": -19, 00:04:12.397 "message": "No such device" 00:04:12.397 } 00:04:12.397 00:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:12.397 00:40:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:12.397 00:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.397 00:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.397 [2024-07-16 00:40:47.005725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:12.397 00:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.397 00:40:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:12.397 00:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.397 00:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.655 00:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.655 00:40:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:12.655 { 00:04:12.655 "subsystems": [ 00:04:12.655 { 00:04:12.655 "subsystem": "vfio_user_target", 00:04:12.655 "config": null 00:04:12.655 }, 00:04:12.655 { 00:04:12.655 "subsystem": "keyring", 00:04:12.655 "config": [] 00:04:12.655 }, 00:04:12.655 { 00:04:12.655 "subsystem": "iobuf", 00:04:12.655 "config": [ 00:04:12.655 { 00:04:12.655 "method": "iobuf_set_options", 00:04:12.655 "params": { 00:04:12.655 "small_pool_count": 8192, 00:04:12.655 "large_pool_count": 1024, 00:04:12.655 "small_bufsize": 8192, 00:04:12.655 "large_bufsize": 
135168 00:04:12.655 } 00:04:12.655 } 00:04:12.655 ] 00:04:12.655 }, 00:04:12.655 { 00:04:12.655 "subsystem": "sock", 00:04:12.655 "config": [ 00:04:12.655 { 00:04:12.655 "method": "sock_set_default_impl", 00:04:12.655 "params": { 00:04:12.655 "impl_name": "posix" 00:04:12.655 } 00:04:12.655 }, 00:04:12.655 { 00:04:12.655 "method": "sock_impl_set_options", 00:04:12.655 "params": { 00:04:12.655 "impl_name": "ssl", 00:04:12.655 "recv_buf_size": 4096, 00:04:12.655 "send_buf_size": 4096, 00:04:12.655 "enable_recv_pipe": true, 00:04:12.655 "enable_quickack": false, 00:04:12.655 "enable_placement_id": 0, 00:04:12.655 "enable_zerocopy_send_server": true, 00:04:12.655 "enable_zerocopy_send_client": false, 00:04:12.655 "zerocopy_threshold": 0, 00:04:12.655 "tls_version": 0, 00:04:12.655 "enable_ktls": false 00:04:12.655 } 00:04:12.655 }, 00:04:12.655 { 00:04:12.655 "method": "sock_impl_set_options", 00:04:12.655 "params": { 00:04:12.655 "impl_name": "posix", 00:04:12.655 "recv_buf_size": 2097152, 00:04:12.655 "send_buf_size": 2097152, 00:04:12.655 "enable_recv_pipe": true, 00:04:12.655 "enable_quickack": false, 00:04:12.655 "enable_placement_id": 0, 00:04:12.655 "enable_zerocopy_send_server": true, 00:04:12.655 "enable_zerocopy_send_client": false, 00:04:12.655 "zerocopy_threshold": 0, 00:04:12.655 "tls_version": 0, 00:04:12.655 "enable_ktls": false 00:04:12.655 } 00:04:12.655 } 00:04:12.655 ] 00:04:12.655 }, 00:04:12.655 { 00:04:12.655 "subsystem": "vmd", 00:04:12.655 "config": [] 00:04:12.655 }, 00:04:12.655 { 00:04:12.655 "subsystem": "accel", 00:04:12.655 "config": [ 00:04:12.655 { 00:04:12.655 "method": "accel_set_options", 00:04:12.655 "params": { 00:04:12.655 "small_cache_size": 128, 00:04:12.655 "large_cache_size": 16, 00:04:12.655 "task_count": 2048, 00:04:12.655 "sequence_count": 2048, 00:04:12.655 "buf_count": 2048 00:04:12.655 } 00:04:12.655 } 00:04:12.655 ] 00:04:12.655 }, 00:04:12.655 { 00:04:12.655 "subsystem": "bdev", 00:04:12.655 "config": [ 00:04:12.655 { 00:04:12.655 "method": "bdev_set_options", 00:04:12.655 "params": { 00:04:12.655 "bdev_io_pool_size": 65535, 00:04:12.655 "bdev_io_cache_size": 256, 00:04:12.655 "bdev_auto_examine": true, 00:04:12.655 "iobuf_small_cache_size": 128, 00:04:12.655 "iobuf_large_cache_size": 16 00:04:12.655 } 00:04:12.655 }, 00:04:12.655 { 00:04:12.655 "method": "bdev_raid_set_options", 00:04:12.655 "params": { 00:04:12.655 "process_window_size_kb": 1024 00:04:12.655 } 00:04:12.655 }, 00:04:12.655 { 00:04:12.655 "method": "bdev_iscsi_set_options", 00:04:12.655 "params": { 00:04:12.656 "timeout_sec": 30 00:04:12.656 } 00:04:12.656 }, 00:04:12.656 { 00:04:12.656 "method": "bdev_nvme_set_options", 00:04:12.656 "params": { 00:04:12.656 "action_on_timeout": "none", 00:04:12.656 "timeout_us": 0, 00:04:12.656 "timeout_admin_us": 0, 00:04:12.656 "keep_alive_timeout_ms": 10000, 00:04:12.656 "arbitration_burst": 0, 00:04:12.656 "low_priority_weight": 0, 00:04:12.656 "medium_priority_weight": 0, 00:04:12.656 "high_priority_weight": 0, 00:04:12.656 "nvme_adminq_poll_period_us": 10000, 00:04:12.656 "nvme_ioq_poll_period_us": 0, 00:04:12.656 "io_queue_requests": 0, 00:04:12.656 "delay_cmd_submit": true, 00:04:12.656 "transport_retry_count": 4, 00:04:12.656 "bdev_retry_count": 3, 00:04:12.656 "transport_ack_timeout": 0, 00:04:12.656 "ctrlr_loss_timeout_sec": 0, 00:04:12.656 "reconnect_delay_sec": 0, 00:04:12.656 "fast_io_fail_timeout_sec": 0, 00:04:12.656 "disable_auto_failback": false, 00:04:12.656 "generate_uuids": false, 00:04:12.656 "transport_tos": 0, 
00:04:12.656 "nvme_error_stat": false, 00:04:12.656 "rdma_srq_size": 0, 00:04:12.656 "io_path_stat": false, 00:04:12.656 "allow_accel_sequence": false, 00:04:12.656 "rdma_max_cq_size": 0, 00:04:12.656 "rdma_cm_event_timeout_ms": 0, 00:04:12.656 "dhchap_digests": [ 00:04:12.656 "sha256", 00:04:12.656 "sha384", 00:04:12.656 "sha512" 00:04:12.656 ], 00:04:12.656 "dhchap_dhgroups": [ 00:04:12.656 "null", 00:04:12.656 "ffdhe2048", 00:04:12.656 "ffdhe3072", 00:04:12.656 "ffdhe4096", 00:04:12.656 "ffdhe6144", 00:04:12.656 "ffdhe8192" 00:04:12.656 ] 00:04:12.656 } 00:04:12.656 }, 00:04:12.656 { 00:04:12.656 "method": "bdev_nvme_set_hotplug", 00:04:12.656 "params": { 00:04:12.656 "period_us": 100000, 00:04:12.656 "enable": false 00:04:12.656 } 00:04:12.656 }, 00:04:12.656 { 00:04:12.656 "method": "bdev_wait_for_examine" 00:04:12.656 } 00:04:12.656 ] 00:04:12.656 }, 00:04:12.656 { 00:04:12.656 "subsystem": "scsi", 00:04:12.656 "config": null 00:04:12.656 }, 00:04:12.656 { 00:04:12.656 "subsystem": "scheduler", 00:04:12.656 "config": [ 00:04:12.656 { 00:04:12.656 "method": "framework_set_scheduler", 00:04:12.656 "params": { 00:04:12.656 "name": "static" 00:04:12.656 } 00:04:12.656 } 00:04:12.656 ] 00:04:12.656 }, 00:04:12.656 { 00:04:12.656 "subsystem": "vhost_scsi", 00:04:12.656 "config": [] 00:04:12.656 }, 00:04:12.656 { 00:04:12.656 "subsystem": "vhost_blk", 00:04:12.656 "config": [] 00:04:12.656 }, 00:04:12.656 { 00:04:12.656 "subsystem": "ublk", 00:04:12.656 "config": [] 00:04:12.656 }, 00:04:12.656 { 00:04:12.656 "subsystem": "nbd", 00:04:12.656 "config": [] 00:04:12.656 }, 00:04:12.656 { 00:04:12.656 "subsystem": "nvmf", 00:04:12.656 "config": [ 00:04:12.656 { 00:04:12.656 "method": "nvmf_set_config", 00:04:12.656 "params": { 00:04:12.656 "discovery_filter": "match_any", 00:04:12.656 "admin_cmd_passthru": { 00:04:12.656 "identify_ctrlr": false 00:04:12.656 } 00:04:12.656 } 00:04:12.656 }, 00:04:12.656 { 00:04:12.656 "method": "nvmf_set_max_subsystems", 00:04:12.656 "params": { 00:04:12.656 "max_subsystems": 1024 00:04:12.656 } 00:04:12.656 }, 00:04:12.656 { 00:04:12.656 "method": "nvmf_set_crdt", 00:04:12.656 "params": { 00:04:12.656 "crdt1": 0, 00:04:12.656 "crdt2": 0, 00:04:12.656 "crdt3": 0 00:04:12.656 } 00:04:12.656 }, 00:04:12.656 { 00:04:12.656 "method": "nvmf_create_transport", 00:04:12.656 "params": { 00:04:12.656 "trtype": "TCP", 00:04:12.656 "max_queue_depth": 128, 00:04:12.656 "max_io_qpairs_per_ctrlr": 127, 00:04:12.656 "in_capsule_data_size": 4096, 00:04:12.656 "max_io_size": 131072, 00:04:12.656 "io_unit_size": 131072, 00:04:12.656 "max_aq_depth": 128, 00:04:12.656 "num_shared_buffers": 511, 00:04:12.656 "buf_cache_size": 4294967295, 00:04:12.656 "dif_insert_or_strip": false, 00:04:12.656 "zcopy": false, 00:04:12.656 "c2h_success": true, 00:04:12.656 "sock_priority": 0, 00:04:12.656 "abort_timeout_sec": 1, 00:04:12.656 "ack_timeout": 0, 00:04:12.656 "data_wr_pool_size": 0 00:04:12.656 } 00:04:12.656 } 00:04:12.656 ] 00:04:12.656 }, 00:04:12.656 { 00:04:12.656 "subsystem": "iscsi", 00:04:12.656 "config": [ 00:04:12.656 { 00:04:12.656 "method": "iscsi_set_options", 00:04:12.656 "params": { 00:04:12.656 "node_base": "iqn.2016-06.io.spdk", 00:04:12.656 "max_sessions": 128, 00:04:12.656 "max_connections_per_session": 2, 00:04:12.656 "max_queue_depth": 64, 00:04:12.656 "default_time2wait": 2, 00:04:12.656 "default_time2retain": 20, 00:04:12.656 "first_burst_length": 8192, 00:04:12.656 "immediate_data": true, 00:04:12.656 "allow_duplicated_isid": false, 00:04:12.656 
"error_recovery_level": 0, 00:04:12.656 "nop_timeout": 60, 00:04:12.656 "nop_in_interval": 30, 00:04:12.656 "disable_chap": false, 00:04:12.656 "require_chap": false, 00:04:12.656 "mutual_chap": false, 00:04:12.656 "chap_group": 0, 00:04:12.656 "max_large_datain_per_connection": 64, 00:04:12.656 "max_r2t_per_connection": 4, 00:04:12.656 "pdu_pool_size": 36864, 00:04:12.656 "immediate_data_pool_size": 16384, 00:04:12.656 "data_out_pool_size": 2048 00:04:12.656 } 00:04:12.656 } 00:04:12.656 ] 00:04:12.656 } 00:04:12.656 ] 00:04:12.656 } 00:04:12.656 00:40:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:12.656 00:40:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2517862 00:04:12.656 00:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2517862 ']' 00:04:12.656 00:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2517862 00:04:12.656 00:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:12.656 00:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:12.656 00:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2517862 00:04:12.656 00:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:12.656 00:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:12.656 00:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2517862' 00:04:12.656 killing process with pid 2517862 00:04:12.656 00:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2517862 00:04:12.656 00:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2517862 00:04:12.914 00:40:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2518002 00:04:12.914 00:40:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:12.914 00:40:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:18.231 00:40:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2518002 00:04:18.231 00:40:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2518002 ']' 00:04:18.231 00:40:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2518002 00:04:18.231 00:40:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:18.231 00:40:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:18.231 00:40:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2518002 00:04:18.231 00:40:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:18.231 00:40:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:18.231 00:40:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2518002' 00:04:18.231 killing process with pid 2518002 00:04:18.231 00:40:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2518002 00:04:18.231 00:40:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2518002 
00:04:18.490 00:40:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:18.490 00:40:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:18.490 00:04:18.490 real 0m6.622s 00:04:18.490 user 0m6.221s 00:04:18.490 sys 0m0.682s 00:04:18.490 00:40:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.490 00:40:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.490 ************************************ 00:04:18.490 END TEST skip_rpc_with_json 00:04:18.490 ************************************ 00:04:18.490 00:40:53 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:18.490 00:40:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:18.490 00:40:53 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.490 00:40:53 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.490 00:40:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.490 ************************************ 00:04:18.490 START TEST skip_rpc_with_delay 00:04:18.490 ************************************ 00:04:18.490 00:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:18.490 00:40:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:18.490 00:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:18.491 00:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:18.491 00:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.491 00:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:18.491 00:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.491 00:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:18.491 00:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.491 00:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:18.491 00:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.491 00:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:18.491 00:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:18.491 [2024-07-16 00:40:53.243111] app.c: 837:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
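The error just above is the outcome skip_rpc_with_delay is checking for: --wait-for-rpc holds back framework initialization until an RPC arrives, so it is rejected when --no-rpc-server means no RPC can ever arrive. For contrast, a hedged sketch of the flag's normal use (framework_start_init is assumed to be the initialization RPC, as in stock SPDK):

    # start the target but defer subsystem initialization until told via RPC
    ./build/bin/spdk_tgt --wait-for-rpc &
    # send any pre-init RPCs here, then let initialization proceed
    ./scripts/rpc.py framework_start_init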
00:04:18.491 [2024-07-16 00:40:53.243228] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:18.748 00:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:18.748 00:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:18.748 00:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:18.748 00:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:18.748 00:04:18.748 real 0m0.068s 00:04:18.748 user 0m0.047s 00:04:18.748 sys 0m0.020s 00:04:18.748 00:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.748 00:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:18.748 ************************************ 00:04:18.748 END TEST skip_rpc_with_delay 00:04:18.748 ************************************ 00:04:18.748 00:40:53 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:18.748 00:40:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:18.748 00:40:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:18.748 00:40:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:18.748 00:40:53 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.748 00:40:53 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.748 00:40:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.748 ************************************ 00:04:18.748 START TEST exit_on_failed_rpc_init 00:04:18.748 ************************************ 00:04:18.748 00:40:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:18.748 00:40:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2518722 00:04:18.748 00:40:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:18.748 00:40:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2518722 00:04:18.748 00:40:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2518722 ']' 00:04:18.748 00:40:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.748 00:40:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:18.748 00:40:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.748 00:40:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:18.748 00:40:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:18.748 [2024-07-16 00:40:53.362704] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:04:18.749 [2024-07-16 00:40:53.362788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2518722 ] 00:04:18.749 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.749 [2024-07-16 00:40:53.424288] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.005 [2024-07-16 00:40:53.540203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.569 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:19.569 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:19.569 00:40:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.569 00:40:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:19.569 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:19.569 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:19.569 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.569 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:19.569 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.569 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:19.569 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.569 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:19.569 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.569 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:19.569 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:19.826 [2024-07-16 00:40:54.355369] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:04:19.826 [2024-07-16 00:40:54.355446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2518858 ] 00:04:19.826 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.826 [2024-07-16 00:40:54.415917] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.826 [2024-07-16 00:40:54.535639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:19.826 [2024-07-16 00:40:54.535773] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:19.826 [2024-07-16 00:40:54.535795] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:19.827 [2024-07-16 00:40:54.535808] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:20.083 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:20.083 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:20.083 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:20.083 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:20.083 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:20.083 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:20.083 00:40:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:20.083 00:40:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2518722 00:04:20.083 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2518722 ']' 00:04:20.083 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2518722 00:04:20.083 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:20.083 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:20.083 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2518722 00:04:20.083 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:20.083 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:20.084 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2518722' 00:04:20.084 killing process with pid 2518722 00:04:20.084 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2518722 00:04:20.084 00:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2518722 00:04:20.650 00:04:20.650 real 0m1.853s 00:04:20.650 user 0m2.230s 00:04:20.650 sys 0m0.486s 00:04:20.650 00:40:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.650 00:40:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.650 ************************************ 00:04:20.650 END TEST exit_on_failed_rpc_init 00:04:20.650 ************************************ 00:04:20.650 00:40:55 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:20.650 00:40:55 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:20.650 00:04:20.650 real 0m14.262s 00:04:20.650 user 0m13.764s 00:04:20.650 sys 0m1.665s 00:04:20.650 00:40:55 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.650 00:40:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.650 ************************************ 00:04:20.650 END TEST skip_rpc 00:04:20.650 ************************************ 00:04:20.650 00:40:55 -- common/autotest_common.sh@1142 -- # return 0 00:04:20.650 00:40:55 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:20.650 00:40:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.650 00:40:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.650 00:40:55 -- common/autotest_common.sh@10 -- # set +x 00:04:20.650 ************************************ 00:04:20.650 START TEST rpc_client 00:04:20.650 ************************************ 00:04:20.650 00:40:55 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:20.650 * Looking for test storage... 00:04:20.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:20.650 00:40:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:20.650 OK 00:04:20.650 00:40:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:20.650 00:04:20.650 real 0m0.068s 00:04:20.650 user 0m0.028s 00:04:20.650 sys 0m0.046s 00:04:20.650 00:40:55 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.650 00:40:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:20.650 ************************************ 00:04:20.650 END TEST rpc_client 00:04:20.650 ************************************ 00:04:20.650 00:40:55 -- common/autotest_common.sh@1142 -- # return 0 00:04:20.650 00:40:55 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:20.650 00:40:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.650 00:40:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.650 00:40:55 -- common/autotest_common.sh@10 -- # set +x 00:04:20.650 ************************************ 00:04:20.650 START TEST json_config 00:04:20.650 ************************************ 00:04:20.650 00:40:55 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:20.650 00:40:55 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:20.650 
00:40:55 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:20.650 00:40:55 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:20.650 00:40:55 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:20.650 00:40:55 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:20.650 00:40:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.650 00:40:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.650 00:40:55 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.650 00:40:55 json_config -- paths/export.sh@5 -- # export PATH 00:04:20.650 00:40:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@47 -- # : 0 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:20.650 00:40:55 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:20.650 00:40:55 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:20.650 00:40:55 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:20.650 00:40:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:20.650 00:40:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:20.650 00:40:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:20.650 00:40:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:20.650 00:40:55 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:20.650 00:40:55 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:20.650 00:40:55 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:20.650 00:40:55 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:20.650 00:40:55 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:20.650 00:40:55 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:20.650 00:40:55 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:20.650 00:40:55 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:20.650 00:40:55 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:20.650 00:40:55 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:20.650 00:40:55 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:20.650 INFO: JSON configuration test init 00:04:20.650 00:40:55 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:20.650 00:40:55 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:20.909 00:40:55 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:20.909 00:40:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.909 00:40:55 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:20.909 00:40:55 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:20.909 00:40:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.909 00:40:55 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:20.909 00:40:55 json_config -- json_config/common.sh@9 -- # local app=target 00:04:20.909 00:40:55 json_config -- json_config/common.sh@10 -- # shift 00:04:20.909 00:40:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:20.909 00:40:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:20.909 00:40:55 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:20.909 00:40:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.909 00:40:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.909 00:40:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2519101 00:04:20.909 00:40:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:20.909 00:40:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:20.909 Waiting for target to run... 00:04:20.909 00:40:55 json_config -- json_config/common.sh@25 -- # waitforlisten 2519101 /var/tmp/spdk_tgt.sock 00:04:20.909 00:40:55 json_config -- common/autotest_common.sh@829 -- # '[' -z 2519101 ']' 00:04:20.909 00:40:55 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:20.909 00:40:55 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:20.909 00:40:55 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:20.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:20.909 00:40:55 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:20.909 00:40:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.909 [2024-07-16 00:40:55.462210] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:04:20.909 [2024-07-16 00:40:55.462311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2519101 ] 00:04:20.909 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.477 [2024-07-16 00:40:55.965821] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.477 [2024-07-16 00:40:56.069255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.734 00:40:56 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:21.734 00:40:56 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:21.734 00:40:56 json_config -- json_config/common.sh@26 -- # echo '' 00:04:21.734 00:04:21.734 00:40:56 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:21.734 00:40:56 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:21.734 00:40:56 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:21.734 00:40:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.734 00:40:56 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:21.734 00:40:56 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:21.734 00:40:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:21.734 00:40:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.734 00:40:56 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:21.734 00:40:56 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:21.734 00:40:56 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:25.015 00:40:59 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:25.015 00:40:59 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:25.015 00:40:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:25.015 00:40:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.015 00:40:59 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:25.015 00:40:59 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:25.015 00:40:59 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:25.015 00:40:59 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:25.015 00:40:59 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:25.015 00:40:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:25.273 00:40:59 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:25.273 00:40:59 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:25.273 00:40:59 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:25.273 00:40:59 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:25.273 00:40:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:25.273 00:40:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.273 00:40:59 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:25.273 00:40:59 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:25.273 00:40:59 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:25.273 00:40:59 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:25.273 00:40:59 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:25.273 00:40:59 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:25.273 00:40:59 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:25.273 00:40:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:25.273 00:40:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.273 00:40:59 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:25.273 00:40:59 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:25.273 00:40:59 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:25.273 00:40:59 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:25.273 00:40:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:25.530 MallocForNvmf0 00:04:25.530 00:41:00 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:25.530 00:41:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:25.788 MallocForNvmf1 00:04:25.788 00:41:00 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:25.788 00:41:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:26.045 [2024-07-16 00:41:00.589561] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:26.045 00:41:00 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:26.045 00:41:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:26.303 00:41:00 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:26.303 00:41:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:26.560 00:41:01 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:26.560 00:41:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:26.818 00:41:01 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:26.818 00:41:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:27.076 [2024-07-16 00:41:01.576792] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:27.076 00:41:01 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:27.076 00:41:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:27.076 00:41:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.076 00:41:01 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:27.076 00:41:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:27.076 00:41:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.076 00:41:01 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:27.076 00:41:01 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:27.076 00:41:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:27.334 MallocBdevForConfigChangeCheck 00:04:27.334 00:41:01 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:27.334 00:41:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:27.334 00:41:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.334 00:41:01 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:27.334 00:41:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:27.591 00:41:02 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:27.591 INFO: shutting down applications... 00:04:27.591 00:41:02 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:27.591 00:41:02 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:27.591 00:41:02 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:27.591 00:41:02 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:29.489 Calling clear_iscsi_subsystem 00:04:29.489 Calling clear_nvmf_subsystem 00:04:29.489 Calling clear_nbd_subsystem 00:04:29.489 Calling clear_ublk_subsystem 00:04:29.489 Calling clear_vhost_blk_subsystem 00:04:29.489 Calling clear_vhost_scsi_subsystem 00:04:29.489 Calling clear_bdev_subsystem 00:04:29.489 00:41:03 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:29.489 00:41:03 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:29.489 00:41:03 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:29.489 00:41:03 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:29.489 00:41:03 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:29.489 00:41:03 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:29.747 00:41:04 json_config -- json_config/json_config.sh@345 -- # break 00:04:29.747 00:41:04 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:29.747 00:41:04 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:29.747 00:41:04 json_config -- json_config/common.sh@31 -- # local app=target 00:04:29.747 00:41:04 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:29.747 00:41:04 json_config -- json_config/common.sh@35 -- # [[ -n 2519101 ]] 00:04:29.747 00:41:04 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2519101 00:04:29.747 00:41:04 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:29.747 00:41:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.747 00:41:04 json_config -- json_config/common.sh@41 -- # kill -0 2519101 00:04:29.747 00:41:04 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:30.315 00:41:04 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:30.315 00:41:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.315 00:41:04 json_config -- json_config/common.sh@41 -- # kill -0 2519101 00:04:30.315 00:41:04 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:30.315 00:41:04 json_config -- json_config/common.sh@43 -- # break 00:04:30.315 00:41:04 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:30.315 00:41:04 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:04:30.315 SPDK target shutdown done 00:04:30.315 00:41:04 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:30.315 INFO: relaunching applications... 00:04:30.315 00:41:04 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.315 00:41:04 json_config -- json_config/common.sh@9 -- # local app=target 00:04:30.315 00:41:04 json_config -- json_config/common.sh@10 -- # shift 00:04:30.315 00:41:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.315 00:41:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.315 00:41:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.315 00:41:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.315 00:41:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.315 00:41:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2520296 00:04:30.315 00:41:04 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.315 00:41:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.315 Waiting for target to run... 00:04:30.316 00:41:04 json_config -- json_config/common.sh@25 -- # waitforlisten 2520296 /var/tmp/spdk_tgt.sock 00:04:30.316 00:41:04 json_config -- common/autotest_common.sh@829 -- # '[' -z 2520296 ']' 00:04:30.316 00:41:04 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.316 00:41:04 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.316 00:41:04 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.316 00:41:04 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.316 00:41:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.316 [2024-07-16 00:41:04.967559] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:04:30.316 [2024-07-16 00:41:04.967654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2520296 ] 00:04:30.316 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.574 [2024-07-16 00:41:05.326597] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.832 [2024-07-16 00:41:05.418750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.118 [2024-07-16 00:41:08.457939] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.118 [2024-07-16 00:41:08.490399] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:34.118 00:41:08 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.118 00:41:08 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:34.118 00:41:08 json_config -- json_config/common.sh@26 -- # echo '' 00:04:34.118 00:04:34.118 00:41:08 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:34.118 00:41:08 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:34.118 INFO: Checking if target configuration is the same... 00:04:34.118 00:41:08 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.118 00:41:08 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:34.118 00:41:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.118 + '[' 2 -ne 2 ']' 00:04:34.118 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:34.118 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:34.118 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:34.118 +++ basename /dev/fd/62 00:04:34.118 ++ mktemp /tmp/62.XXX 00:04:34.118 + tmp_file_1=/tmp/62.iAI 00:04:34.118 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.118 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:34.118 + tmp_file_2=/tmp/spdk_tgt_config.json.fWR 00:04:34.118 + ret=0 00:04:34.118 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:34.377 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:34.377 + diff -u /tmp/62.iAI /tmp/spdk_tgt_config.json.fWR 00:04:34.377 + echo 'INFO: JSON config files are the same' 00:04:34.377 INFO: JSON config files are the same 00:04:34.377 + rm /tmp/62.iAI /tmp/spdk_tgt_config.json.fWR 00:04:34.377 + exit 0 00:04:34.377 00:41:08 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:34.377 00:41:08 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:34.377 INFO: changing configuration and checking if this can be detected... 
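The 'JSON config files are the same' result above comes from a straightforward compare: the live configuration is dumped over the target's RPC socket with save_config, both sides are normalized with config_filter.py -method sort, and the sorted documents are diffed; exit 0 means no drift from the spdk_tgt_config.json the target was relaunched with. A rough standalone equivalent of what json_diff.sh does here (temporary file names are illustrative):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # dump the running target's configuration and normalize both sides before diffing
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
        | "$SPDK/test/json_config/config_filter.py" -method sort > /tmp/live_sorted.json
    "$SPDK/test/json_config/config_filter.py" -method sort \
        < "$SPDK/spdk_tgt_config.json" > /tmp/file_sorted.json
    diff -u /tmp/live_sorted.json /tmp/file_sorted.json && echo 'INFO: JSON config files are the same'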
00:04:34.377 00:41:08 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:34.377 00:41:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:34.635 00:41:09 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.635 00:41:09 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:34.635 00:41:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.635 + '[' 2 -ne 2 ']' 00:04:34.635 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:34.635 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:34.635 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:34.635 +++ basename /dev/fd/62 00:04:34.635 ++ mktemp /tmp/62.XXX 00:04:34.635 + tmp_file_1=/tmp/62.9gV 00:04:34.635 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.635 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:34.635 + tmp_file_2=/tmp/spdk_tgt_config.json.a7d 00:04:34.635 + ret=0 00:04:34.635 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:34.894 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:34.894 + diff -u /tmp/62.9gV /tmp/spdk_tgt_config.json.a7d 00:04:34.894 + ret=1 00:04:34.894 + echo '=== Start of file: /tmp/62.9gV ===' 00:04:34.894 + cat /tmp/62.9gV 00:04:34.894 + echo '=== End of file: /tmp/62.9gV ===' 00:04:34.894 + echo '' 00:04:34.894 + echo '=== Start of file: /tmp/spdk_tgt_config.json.a7d ===' 00:04:34.894 + cat /tmp/spdk_tgt_config.json.a7d 00:04:34.894 + echo '=== End of file: /tmp/spdk_tgt_config.json.a7d ===' 00:04:34.894 + echo '' 00:04:34.894 + rm /tmp/62.9gV /tmp/spdk_tgt_config.json.a7d 00:04:34.894 + exit 1 00:04:34.894 00:41:09 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:34.894 INFO: configuration change detected. 
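The change-detection half mirrors the check above: the marker bdev MallocBdevForConfigChangeCheck created during setup is deleted over RPC, the live configuration is saved again, and this time the diff against spdk_tgt_config.json is expected to fail (ret=1), which is what 'configuration change detected.' records. In outline, assuming the same socket and repo root as in the previous sketch:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # deleting the marker bdev must make the saved-vs-live comparison fail
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    if "$SPDK/test/json_config/json_diff.sh" \
           <("$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config) \
           "$SPDK/spdk_tgt_config.json"; then
        echo "configs still match - the change was not detected" >&2
        exit 1
    fi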
00:04:34.894 00:41:09 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:34.894 00:41:09 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:34.894 00:41:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:34.894 00:41:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.894 00:41:09 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:34.894 00:41:09 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:34.894 00:41:09 json_config -- json_config/json_config.sh@317 -- # [[ -n 2520296 ]] 00:04:34.894 00:41:09 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:34.894 00:41:09 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:34.894 00:41:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:34.894 00:41:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.894 00:41:09 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:34.894 00:41:09 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:34.894 00:41:09 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:34.894 00:41:09 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:34.894 00:41:09 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:34.894 00:41:09 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:34.894 00:41:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:34.894 00:41:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.219 00:41:09 json_config -- json_config/json_config.sh@323 -- # killprocess 2520296 00:04:35.219 00:41:09 json_config -- common/autotest_common.sh@948 -- # '[' -z 2520296 ']' 00:04:35.219 00:41:09 json_config -- common/autotest_common.sh@952 -- # kill -0 2520296 00:04:35.219 00:41:09 json_config -- common/autotest_common.sh@953 -- # uname 00:04:35.219 00:41:09 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:35.219 00:41:09 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2520296 00:04:35.219 00:41:09 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:35.219 00:41:09 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:35.219 00:41:09 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2520296' 00:04:35.219 killing process with pid 2520296 00:04:35.219 00:41:09 json_config -- common/autotest_common.sh@967 -- # kill 2520296 00:04:35.219 00:41:09 json_config -- common/autotest_common.sh@972 -- # wait 2520296 00:04:36.623 00:41:11 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.623 00:41:11 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:36.623 00:41:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:36.623 00:41:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.623 00:41:11 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:36.623 00:41:11 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:36.623 INFO: Success 00:04:36.623 00:04:36.623 real 0m16.012s 
00:04:36.623 user 0m17.887s 00:04:36.623 sys 0m2.051s 00:04:36.623 00:41:11 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.623 00:41:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.623 ************************************ 00:04:36.623 END TEST json_config 00:04:36.623 ************************************ 00:04:36.882 00:41:11 -- common/autotest_common.sh@1142 -- # return 0 00:04:36.883 00:41:11 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:36.883 00:41:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.883 00:41:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.883 00:41:11 -- common/autotest_common.sh@10 -- # set +x 00:04:36.883 ************************************ 00:04:36.883 START TEST json_config_extra_key 00:04:36.883 ************************************ 00:04:36.883 00:41:11 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:36.883 00:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:36.883 00:41:11 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.883 00:41:11 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.883 00:41:11 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.883 00:41:11 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.883 00:41:11 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.883 00:41:11 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.883 00:41:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:36.883 00:41:11 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:36.883 00:41:11 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:36.883 00:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:36.883 00:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:36.883 00:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:36.883 00:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:36.883 00:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:36.883 00:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:36.883 00:41:11 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:36.883 00:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:36.883 00:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:36.883 00:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:36.883 00:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:36.883 INFO: launching applications... 00:04:36.883 00:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:36.883 00:41:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:36.883 00:41:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:36.883 00:41:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:36.883 00:41:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:36.883 00:41:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:36.883 00:41:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.883 00:41:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.883 00:41:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2521208 00:04:36.883 00:41:11 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:36.883 00:41:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:36.883 Waiting for target to run... 00:04:36.883 00:41:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2521208 /var/tmp/spdk_tgt.sock 00:04:36.883 00:41:11 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2521208 ']' 00:04:36.883 00:41:11 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.883 00:41:11 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.883 00:41:11 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:36.883 00:41:11 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.883 00:41:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:36.883 [2024-07-16 00:41:11.513926] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:04:36.883 [2024-07-16 00:41:11.514013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521208 ] 00:04:36.883 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.451 [2024-07-16 00:41:12.008634] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.451 [2024-07-16 00:41:12.116310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.017 00:41:12 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.017 00:41:12 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:38.017 00:41:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:38.017 00:04:38.017 00:41:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:38.017 INFO: shutting down applications... 00:04:38.017 00:41:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:38.017 00:41:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:38.017 00:41:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:38.017 00:41:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2521208 ]] 00:04:38.017 00:41:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2521208 00:04:38.017 00:41:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:38.017 00:41:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.017 00:41:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2521208 00:04:38.017 00:41:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:38.276 00:41:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:38.276 00:41:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.276 00:41:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2521208 00:04:38.276 00:41:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:38.844 00:41:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:38.844 00:41:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.844 00:41:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2521208 00:04:38.844 00:41:13 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:38.844 00:41:13 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:38.844 00:41:13 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:38.844 00:41:13 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:38.844 SPDK target shutdown done 00:04:38.844 00:41:13 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:38.844 Success 00:04:38.844 00:04:38.844 real 0m2.097s 00:04:38.844 user 0m1.508s 00:04:38.844 sys 0m0.597s 00:04:38.844 00:41:13 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.844 00:41:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:38.844 ************************************ 00:04:38.844 END TEST json_config_extra_key 00:04:38.844 ************************************ 00:04:38.844 00:41:13 -- 
common/autotest_common.sh@1142 -- # return 0 00:04:38.844 00:41:13 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:38.844 00:41:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.844 00:41:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.844 00:41:13 -- common/autotest_common.sh@10 -- # set +x 00:04:38.844 ************************************ 00:04:38.844 START TEST alias_rpc 00:04:38.844 ************************************ 00:04:38.844 00:41:13 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:38.844 * Looking for test storage... 00:04:38.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:38.844 00:41:13 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:38.844 00:41:13 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2521521 00:04:38.844 00:41:13 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.844 00:41:13 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2521521 00:04:38.844 00:41:13 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2521521 ']' 00:04:38.844 00:41:13 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.844 00:41:13 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.844 00:41:13 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.844 00:41:13 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.844 00:41:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.101 [2024-07-16 00:41:13.648589] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:04:39.101 [2024-07-16 00:41:13.648669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521521 ] 00:04:39.101 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.101 [2024-07-16 00:41:13.707224] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.101 [2024-07-16 00:41:13.827508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.034 00:41:14 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:40.034 00:41:14 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:40.034 00:41:14 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:40.291 00:41:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2521521 00:04:40.291 00:41:14 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2521521 ']' 00:04:40.291 00:41:14 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2521521 00:04:40.291 00:41:14 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:40.291 00:41:14 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.291 00:41:14 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2521521 00:04:40.291 00:41:14 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:40.291 00:41:14 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:40.291 00:41:14 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2521521' 00:04:40.291 killing process with pid 2521521 00:04:40.291 00:41:14 alias_rpc -- common/autotest_common.sh@967 -- # kill 2521521 00:04:40.291 00:41:14 alias_rpc -- common/autotest_common.sh@972 -- # wait 2521521 00:04:40.856 00:04:40.857 real 0m1.801s 00:04:40.857 user 0m2.074s 00:04:40.857 sys 0m0.449s 00:04:40.857 00:41:15 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.857 00:41:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.857 ************************************ 00:04:40.857 END TEST alias_rpc 00:04:40.857 ************************************ 00:04:40.857 00:41:15 -- common/autotest_common.sh@1142 -- # return 0 00:04:40.857 00:41:15 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:40.857 00:41:15 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:40.857 00:41:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.857 00:41:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.857 00:41:15 -- common/autotest_common.sh@10 -- # set +x 00:04:40.857 ************************************ 00:04:40.857 START TEST spdkcli_tcp 00:04:40.857 ************************************ 00:04:40.857 00:41:15 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:40.857 * Looking for test storage... 
00:04:40.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:40.857 00:41:15 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:40.857 00:41:15 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:40.857 00:41:15 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:40.857 00:41:15 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:40.857 00:41:15 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:40.857 00:41:15 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:40.857 00:41:15 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:40.857 00:41:15 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:40.857 00:41:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.857 00:41:15 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2521720 00:04:40.857 00:41:15 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2521720 00:04:40.857 00:41:15 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2521720 ']' 00:04:40.857 00:41:15 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:40.857 00:41:15 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.857 00:41:15 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:40.857 00:41:15 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.857 00:41:15 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:40.857 00:41:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.857 [2024-07-16 00:41:15.503630] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:04:40.857 [2024-07-16 00:41:15.503709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521720 ] 00:04:40.857 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.857 [2024-07-16 00:41:15.559794] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.114 [2024-07-16 00:41:15.670184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.114 [2024-07-16 00:41:15.670188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.372 00:41:15 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.372 00:41:15 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:41.372 00:41:15 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2521844 00:04:41.372 00:41:15 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:41.372 00:41:15 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:41.630 [ 00:04:41.630 "bdev_malloc_delete", 00:04:41.630 "bdev_malloc_create", 00:04:41.630 "bdev_null_resize", 00:04:41.630 "bdev_null_delete", 00:04:41.630 "bdev_null_create", 00:04:41.630 "bdev_nvme_cuse_unregister", 00:04:41.630 "bdev_nvme_cuse_register", 00:04:41.630 "bdev_opal_new_user", 00:04:41.630 "bdev_opal_set_lock_state", 00:04:41.630 "bdev_opal_delete", 00:04:41.630 "bdev_opal_get_info", 00:04:41.630 "bdev_opal_create", 00:04:41.630 "bdev_nvme_opal_revert", 00:04:41.630 "bdev_nvme_opal_init", 00:04:41.630 "bdev_nvme_send_cmd", 00:04:41.630 "bdev_nvme_get_path_iostat", 00:04:41.630 "bdev_nvme_get_mdns_discovery_info", 00:04:41.630 "bdev_nvme_stop_mdns_discovery", 00:04:41.630 "bdev_nvme_start_mdns_discovery", 00:04:41.630 "bdev_nvme_set_multipath_policy", 00:04:41.630 "bdev_nvme_set_preferred_path", 00:04:41.630 "bdev_nvme_get_io_paths", 00:04:41.630 "bdev_nvme_remove_error_injection", 00:04:41.630 "bdev_nvme_add_error_injection", 00:04:41.630 "bdev_nvme_get_discovery_info", 00:04:41.630 "bdev_nvme_stop_discovery", 00:04:41.630 "bdev_nvme_start_discovery", 00:04:41.630 "bdev_nvme_get_controller_health_info", 00:04:41.630 "bdev_nvme_disable_controller", 00:04:41.630 "bdev_nvme_enable_controller", 00:04:41.630 "bdev_nvme_reset_controller", 00:04:41.630 "bdev_nvme_get_transport_statistics", 00:04:41.630 "bdev_nvme_apply_firmware", 00:04:41.630 "bdev_nvme_detach_controller", 00:04:41.630 "bdev_nvme_get_controllers", 00:04:41.630 "bdev_nvme_attach_controller", 00:04:41.630 "bdev_nvme_set_hotplug", 00:04:41.630 "bdev_nvme_set_options", 00:04:41.630 "bdev_passthru_delete", 00:04:41.630 "bdev_passthru_create", 00:04:41.630 "bdev_lvol_set_parent_bdev", 00:04:41.630 "bdev_lvol_set_parent", 00:04:41.630 "bdev_lvol_check_shallow_copy", 00:04:41.630 "bdev_lvol_start_shallow_copy", 00:04:41.630 "bdev_lvol_grow_lvstore", 00:04:41.630 "bdev_lvol_get_lvols", 00:04:41.630 "bdev_lvol_get_lvstores", 00:04:41.630 "bdev_lvol_delete", 00:04:41.630 "bdev_lvol_set_read_only", 00:04:41.630 "bdev_lvol_resize", 00:04:41.630 "bdev_lvol_decouple_parent", 00:04:41.630 "bdev_lvol_inflate", 00:04:41.630 "bdev_lvol_rename", 00:04:41.630 "bdev_lvol_clone_bdev", 00:04:41.630 "bdev_lvol_clone", 00:04:41.630 "bdev_lvol_snapshot", 00:04:41.630 "bdev_lvol_create", 00:04:41.630 "bdev_lvol_delete_lvstore", 00:04:41.630 
"bdev_lvol_rename_lvstore", 00:04:41.630 "bdev_lvol_create_lvstore", 00:04:41.630 "bdev_raid_set_options", 00:04:41.630 "bdev_raid_remove_base_bdev", 00:04:41.630 "bdev_raid_add_base_bdev", 00:04:41.630 "bdev_raid_delete", 00:04:41.630 "bdev_raid_create", 00:04:41.630 "bdev_raid_get_bdevs", 00:04:41.630 "bdev_error_inject_error", 00:04:41.630 "bdev_error_delete", 00:04:41.630 "bdev_error_create", 00:04:41.630 "bdev_split_delete", 00:04:41.630 "bdev_split_create", 00:04:41.630 "bdev_delay_delete", 00:04:41.630 "bdev_delay_create", 00:04:41.630 "bdev_delay_update_latency", 00:04:41.630 "bdev_zone_block_delete", 00:04:41.630 "bdev_zone_block_create", 00:04:41.630 "blobfs_create", 00:04:41.630 "blobfs_detect", 00:04:41.630 "blobfs_set_cache_size", 00:04:41.630 "bdev_aio_delete", 00:04:41.630 "bdev_aio_rescan", 00:04:41.630 "bdev_aio_create", 00:04:41.630 "bdev_ftl_set_property", 00:04:41.630 "bdev_ftl_get_properties", 00:04:41.630 "bdev_ftl_get_stats", 00:04:41.630 "bdev_ftl_unmap", 00:04:41.630 "bdev_ftl_unload", 00:04:41.630 "bdev_ftl_delete", 00:04:41.630 "bdev_ftl_load", 00:04:41.630 "bdev_ftl_create", 00:04:41.630 "bdev_virtio_attach_controller", 00:04:41.630 "bdev_virtio_scsi_get_devices", 00:04:41.630 "bdev_virtio_detach_controller", 00:04:41.630 "bdev_virtio_blk_set_hotplug", 00:04:41.630 "bdev_iscsi_delete", 00:04:41.630 "bdev_iscsi_create", 00:04:41.630 "bdev_iscsi_set_options", 00:04:41.630 "accel_error_inject_error", 00:04:41.630 "ioat_scan_accel_module", 00:04:41.630 "dsa_scan_accel_module", 00:04:41.630 "iaa_scan_accel_module", 00:04:41.630 "vfu_virtio_create_scsi_endpoint", 00:04:41.630 "vfu_virtio_scsi_remove_target", 00:04:41.630 "vfu_virtio_scsi_add_target", 00:04:41.630 "vfu_virtio_create_blk_endpoint", 00:04:41.630 "vfu_virtio_delete_endpoint", 00:04:41.630 "keyring_file_remove_key", 00:04:41.630 "keyring_file_add_key", 00:04:41.630 "keyring_linux_set_options", 00:04:41.630 "iscsi_get_histogram", 00:04:41.630 "iscsi_enable_histogram", 00:04:41.630 "iscsi_set_options", 00:04:41.630 "iscsi_get_auth_groups", 00:04:41.630 "iscsi_auth_group_remove_secret", 00:04:41.630 "iscsi_auth_group_add_secret", 00:04:41.630 "iscsi_delete_auth_group", 00:04:41.630 "iscsi_create_auth_group", 00:04:41.630 "iscsi_set_discovery_auth", 00:04:41.630 "iscsi_get_options", 00:04:41.630 "iscsi_target_node_request_logout", 00:04:41.630 "iscsi_target_node_set_redirect", 00:04:41.630 "iscsi_target_node_set_auth", 00:04:41.630 "iscsi_target_node_add_lun", 00:04:41.630 "iscsi_get_stats", 00:04:41.630 "iscsi_get_connections", 00:04:41.630 "iscsi_portal_group_set_auth", 00:04:41.630 "iscsi_start_portal_group", 00:04:41.630 "iscsi_delete_portal_group", 00:04:41.631 "iscsi_create_portal_group", 00:04:41.631 "iscsi_get_portal_groups", 00:04:41.631 "iscsi_delete_target_node", 00:04:41.631 "iscsi_target_node_remove_pg_ig_maps", 00:04:41.631 "iscsi_target_node_add_pg_ig_maps", 00:04:41.631 "iscsi_create_target_node", 00:04:41.631 "iscsi_get_target_nodes", 00:04:41.631 "iscsi_delete_initiator_group", 00:04:41.631 "iscsi_initiator_group_remove_initiators", 00:04:41.631 "iscsi_initiator_group_add_initiators", 00:04:41.631 "iscsi_create_initiator_group", 00:04:41.631 "iscsi_get_initiator_groups", 00:04:41.631 "nvmf_set_crdt", 00:04:41.631 "nvmf_set_config", 00:04:41.631 "nvmf_set_max_subsystems", 00:04:41.631 "nvmf_stop_mdns_prr", 00:04:41.631 "nvmf_publish_mdns_prr", 00:04:41.631 "nvmf_subsystem_get_listeners", 00:04:41.631 "nvmf_subsystem_get_qpairs", 00:04:41.631 "nvmf_subsystem_get_controllers", 00:04:41.631 
"nvmf_get_stats", 00:04:41.631 "nvmf_get_transports", 00:04:41.631 "nvmf_create_transport", 00:04:41.631 "nvmf_get_targets", 00:04:41.631 "nvmf_delete_target", 00:04:41.631 "nvmf_create_target", 00:04:41.631 "nvmf_subsystem_allow_any_host", 00:04:41.631 "nvmf_subsystem_remove_host", 00:04:41.631 "nvmf_subsystem_add_host", 00:04:41.631 "nvmf_ns_remove_host", 00:04:41.631 "nvmf_ns_add_host", 00:04:41.631 "nvmf_subsystem_remove_ns", 00:04:41.631 "nvmf_subsystem_add_ns", 00:04:41.631 "nvmf_subsystem_listener_set_ana_state", 00:04:41.631 "nvmf_discovery_get_referrals", 00:04:41.631 "nvmf_discovery_remove_referral", 00:04:41.631 "nvmf_discovery_add_referral", 00:04:41.631 "nvmf_subsystem_remove_listener", 00:04:41.631 "nvmf_subsystem_add_listener", 00:04:41.631 "nvmf_delete_subsystem", 00:04:41.631 "nvmf_create_subsystem", 00:04:41.631 "nvmf_get_subsystems", 00:04:41.631 "env_dpdk_get_mem_stats", 00:04:41.631 "nbd_get_disks", 00:04:41.631 "nbd_stop_disk", 00:04:41.631 "nbd_start_disk", 00:04:41.631 "ublk_recover_disk", 00:04:41.631 "ublk_get_disks", 00:04:41.631 "ublk_stop_disk", 00:04:41.631 "ublk_start_disk", 00:04:41.631 "ublk_destroy_target", 00:04:41.631 "ublk_create_target", 00:04:41.631 "virtio_blk_create_transport", 00:04:41.631 "virtio_blk_get_transports", 00:04:41.631 "vhost_controller_set_coalescing", 00:04:41.631 "vhost_get_controllers", 00:04:41.631 "vhost_delete_controller", 00:04:41.631 "vhost_create_blk_controller", 00:04:41.631 "vhost_scsi_controller_remove_target", 00:04:41.631 "vhost_scsi_controller_add_target", 00:04:41.631 "vhost_start_scsi_controller", 00:04:41.631 "vhost_create_scsi_controller", 00:04:41.631 "thread_set_cpumask", 00:04:41.631 "framework_get_governor", 00:04:41.631 "framework_get_scheduler", 00:04:41.631 "framework_set_scheduler", 00:04:41.631 "framework_get_reactors", 00:04:41.631 "thread_get_io_channels", 00:04:41.631 "thread_get_pollers", 00:04:41.631 "thread_get_stats", 00:04:41.631 "framework_monitor_context_switch", 00:04:41.631 "spdk_kill_instance", 00:04:41.631 "log_enable_timestamps", 00:04:41.631 "log_get_flags", 00:04:41.631 "log_clear_flag", 00:04:41.631 "log_set_flag", 00:04:41.631 "log_get_level", 00:04:41.631 "log_set_level", 00:04:41.631 "log_get_print_level", 00:04:41.631 "log_set_print_level", 00:04:41.631 "framework_enable_cpumask_locks", 00:04:41.631 "framework_disable_cpumask_locks", 00:04:41.631 "framework_wait_init", 00:04:41.631 "framework_start_init", 00:04:41.631 "scsi_get_devices", 00:04:41.631 "bdev_get_histogram", 00:04:41.631 "bdev_enable_histogram", 00:04:41.631 "bdev_set_qos_limit", 00:04:41.631 "bdev_set_qd_sampling_period", 00:04:41.631 "bdev_get_bdevs", 00:04:41.631 "bdev_reset_iostat", 00:04:41.631 "bdev_get_iostat", 00:04:41.631 "bdev_examine", 00:04:41.631 "bdev_wait_for_examine", 00:04:41.631 "bdev_set_options", 00:04:41.631 "notify_get_notifications", 00:04:41.631 "notify_get_types", 00:04:41.631 "accel_get_stats", 00:04:41.631 "accel_set_options", 00:04:41.631 "accel_set_driver", 00:04:41.631 "accel_crypto_key_destroy", 00:04:41.631 "accel_crypto_keys_get", 00:04:41.631 "accel_crypto_key_create", 00:04:41.631 "accel_assign_opc", 00:04:41.631 "accel_get_module_info", 00:04:41.631 "accel_get_opc_assignments", 00:04:41.631 "vmd_rescan", 00:04:41.631 "vmd_remove_device", 00:04:41.631 "vmd_enable", 00:04:41.631 "sock_get_default_impl", 00:04:41.631 "sock_set_default_impl", 00:04:41.631 "sock_impl_set_options", 00:04:41.631 "sock_impl_get_options", 00:04:41.631 "iobuf_get_stats", 00:04:41.631 "iobuf_set_options", 
00:04:41.631 "keyring_get_keys", 00:04:41.631 "framework_get_pci_devices", 00:04:41.631 "framework_get_config", 00:04:41.631 "framework_get_subsystems", 00:04:41.631 "vfu_tgt_set_base_path", 00:04:41.631 "trace_get_info", 00:04:41.631 "trace_get_tpoint_group_mask", 00:04:41.631 "trace_disable_tpoint_group", 00:04:41.631 "trace_enable_tpoint_group", 00:04:41.631 "trace_clear_tpoint_mask", 00:04:41.631 "trace_set_tpoint_mask", 00:04:41.631 "spdk_get_version", 00:04:41.631 "rpc_get_methods" 00:04:41.631 ] 00:04:41.631 00:41:16 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:41.631 00:41:16 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:41.631 00:41:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.631 00:41:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:41.631 00:41:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2521720 00:04:41.631 00:41:16 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2521720 ']' 00:04:41.631 00:41:16 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2521720 00:04:41.631 00:41:16 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:41.631 00:41:16 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:41.631 00:41:16 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2521720 00:04:41.631 00:41:16 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:41.631 00:41:16 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:41.631 00:41:16 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2521720' 00:04:41.631 killing process with pid 2521720 00:04:41.631 00:41:16 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2521720 00:04:41.631 00:41:16 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2521720 00:04:42.197 00:04:42.197 real 0m1.303s 00:04:42.197 user 0m2.270s 00:04:42.197 sys 0m0.461s 00:04:42.197 00:41:16 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.197 00:41:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.197 ************************************ 00:04:42.197 END TEST spdkcli_tcp 00:04:42.197 ************************************ 00:04:42.197 00:41:16 -- common/autotest_common.sh@1142 -- # return 0 00:04:42.197 00:41:16 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:42.197 00:41:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.197 00:41:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.197 00:41:16 -- common/autotest_common.sh@10 -- # set +x 00:04:42.197 ************************************ 00:04:42.197 START TEST dpdk_mem_utility 00:04:42.197 ************************************ 00:04:42.197 00:41:16 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:42.197 * Looking for test storage... 
00:04:42.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:42.197 00:41:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:42.197 00:41:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2522021 00:04:42.197 00:41:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.197 00:41:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2522021 00:04:42.197 00:41:16 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2522021 ']' 00:04:42.197 00:41:16 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.197 00:41:16 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.197 00:41:16 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.197 00:41:16 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.197 00:41:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.197 [2024-07-16 00:41:16.850663] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:04:42.197 [2024-07-16 00:41:16.850766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522021 ] 00:04:42.197 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.197 [2024-07-16 00:41:16.911667] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.456 [2024-07-16 00:41:17.031103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.391 00:41:17 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.391 00:41:17 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:43.391 00:41:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:43.391 00:41:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:43.391 00:41:17 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.391 00:41:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.391 { 00:04:43.391 "filename": "/tmp/spdk_mem_dump.txt" 00:04:43.391 } 00:04:43.391 00:41:17 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.391 00:41:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:43.391 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:43.391 1 heaps totaling size 814.000000 MiB 00:04:43.391 size: 814.000000 MiB heap id: 0 00:04:43.391 end heaps---------- 00:04:43.391 8 mempools totaling size 598.116089 MiB 00:04:43.392 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:43.392 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:43.392 size: 84.521057 MiB name: bdev_io_2522021 00:04:43.392 size: 51.011292 MiB name: evtpool_2522021 00:04:43.392 
size: 50.003479 MiB name: msgpool_2522021 00:04:43.392 size: 21.763794 MiB name: PDU_Pool 00:04:43.392 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:43.392 size: 0.026123 MiB name: Session_Pool 00:04:43.392 end mempools------- 00:04:43.392 6 memzones totaling size 4.142822 MiB 00:04:43.392 size: 1.000366 MiB name: RG_ring_0_2522021 00:04:43.392 size: 1.000366 MiB name: RG_ring_1_2522021 00:04:43.392 size: 1.000366 MiB name: RG_ring_4_2522021 00:04:43.392 size: 1.000366 MiB name: RG_ring_5_2522021 00:04:43.392 size: 0.125366 MiB name: RG_ring_2_2522021 00:04:43.392 size: 0.015991 MiB name: RG_ring_3_2522021 00:04:43.392 end memzones------- 00:04:43.392 00:41:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:43.392 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:43.392 list of free elements. size: 12.519348 MiB 00:04:43.392 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:43.392 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:43.392 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:43.392 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:43.392 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:43.392 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:43.392 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:43.392 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:43.392 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:43.392 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:43.392 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:43.392 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:43.392 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:43.392 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:43.392 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:43.392 list of standard malloc elements. 
size: 199.218079 MiB 00:04:43.392 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:43.392 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:43.392 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:43.392 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:43.392 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:43.392 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:43.392 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:43.392 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:43.392 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:43.392 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:43.392 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:43.392 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:43.392 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:43.392 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:43.392 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:43.392 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:43.392 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:43.392 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:43.392 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:43.392 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:43.392 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:43.392 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:43.392 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:43.392 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:43.392 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:43.392 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:43.392 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:43.392 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:43.392 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:43.392 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:43.392 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:43.392 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:43.392 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:43.392 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:43.392 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:43.392 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:43.392 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:43.392 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:43.392 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:43.392 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:43.392 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:43.392 list of memzone associated elements. 
size: 602.262573 MiB 00:04:43.392 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:43.392 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:43.392 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:43.392 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:43.392 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:43.392 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2522021_0 00:04:43.392 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:43.392 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2522021_0 00:04:43.392 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:43.392 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2522021_0 00:04:43.392 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:43.392 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:43.392 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:43.392 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:43.392 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:43.392 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2522021 00:04:43.392 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:43.392 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2522021 00:04:43.392 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:43.392 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2522021 00:04:43.392 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:43.392 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:43.392 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:43.392 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:43.392 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:43.392 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:43.392 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:43.392 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:43.392 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:43.392 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2522021 00:04:43.392 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:43.392 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2522021 00:04:43.392 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:43.392 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2522021 00:04:43.392 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:43.392 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2522021 00:04:43.392 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:43.392 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2522021 00:04:43.392 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:43.392 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:43.392 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:43.392 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:43.392 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:43.392 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:43.392 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:43.392 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2522021 00:04:43.392 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:43.392 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:43.392 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:43.392 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:43.392 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:43.392 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2522021 00:04:43.392 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:43.392 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:43.392 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:43.392 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2522021 00:04:43.392 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:43.392 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2522021 00:04:43.392 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:43.392 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:43.392 00:41:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:43.392 00:41:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2522021 00:04:43.392 00:41:17 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2522021 ']' 00:04:43.392 00:41:17 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2522021 00:04:43.392 00:41:17 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:43.392 00:41:17 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:43.392 00:41:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2522021 00:04:43.392 00:41:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:43.392 00:41:17 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:43.392 00:41:17 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2522021' 00:04:43.392 killing process with pid 2522021 00:04:43.392 00:41:17 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2522021 00:04:43.392 00:41:17 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2522021 00:04:43.959 00:04:43.959 real 0m1.674s 00:04:43.959 user 0m1.884s 00:04:43.959 sys 0m0.435s 00:04:43.959 00:41:18 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.959 00:41:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.959 ************************************ 00:04:43.959 END TEST dpdk_mem_utility 00:04:43.960 ************************************ 00:04:43.960 00:41:18 -- common/autotest_common.sh@1142 -- # return 0 00:04:43.960 00:41:18 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:43.960 00:41:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.960 00:41:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.960 00:41:18 -- common/autotest_common.sh@10 -- # set +x 00:04:43.960 ************************************ 00:04:43.960 START TEST event 00:04:43.960 ************************************ 00:04:43.960 00:41:18 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:43.960 * Looking for test storage... 
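The heap, mempool and memzone listing above is produced by the dpdk_mem_utility test from a memory dump requested over RPC. Reproducing it against an already running target is roughly the following sketch (it assumes the dump lands at the default /tmp/spdk_mem_dump.txt path reported above):

  ./scripts/rpc.py env_dpdk_get_mem_stats      # target writes the dump and returns its filename
  ./scripts/dpdk_mem_info.py                   # summarize heaps, mempools and memzones
  ./scripts/dpdk_mem_info.py -m 0              # element-level detail for heap 0, as printed above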
00:04:43.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:43.960 00:41:18 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:43.960 00:41:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:43.960 00:41:18 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:43.960 00:41:18 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:43.960 00:41:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.960 00:41:18 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.960 ************************************ 00:04:43.960 START TEST event_perf 00:04:43.960 ************************************ 00:04:43.960 00:41:18 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:43.960 Running I/O for 1 seconds...[2024-07-16 00:41:18.560464] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:04:43.960 [2024-07-16 00:41:18.560531] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522244 ] 00:04:43.960 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.960 [2024-07-16 00:41:18.620747] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:44.217 [2024-07-16 00:41:18.735248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.217 [2024-07-16 00:41:18.735308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.217 [2024-07-16 00:41:18.735373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:44.217 [2024-07-16 00:41:18.735376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.152 Running I/O for 1 seconds... 00:04:45.152 lcore 0: 228285 00:04:45.152 lcore 1: 228284 00:04:45.152 lcore 2: 228284 00:04:45.152 lcore 3: 228284 00:04:45.152 done. 00:04:45.152 00:04:45.152 real 0m1.314s 00:04:45.152 user 0m4.225s 00:04:45.152 sys 0m0.084s 00:04:45.152 00:41:19 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.152 00:41:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:45.152 ************************************ 00:04:45.152 END TEST event_perf 00:04:45.152 ************************************ 00:04:45.152 00:41:19 event -- common/autotest_common.sh@1142 -- # return 0 00:04:45.152 00:41:19 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:45.152 00:41:19 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:45.152 00:41:19 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.152 00:41:19 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.412 ************************************ 00:04:45.412 START TEST event_reactor 00:04:45.412 ************************************ 00:04:45.412 00:41:19 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:45.412 [2024-07-16 00:41:19.926632] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:04:45.412 [2024-07-16 00:41:19.926698] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522401 ] 00:04:45.412 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.412 [2024-07-16 00:41:19.989080] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.412 [2024-07-16 00:41:20.110690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.781 test_start 00:04:46.782 oneshot 00:04:46.782 tick 100 00:04:46.782 tick 100 00:04:46.782 tick 250 00:04:46.782 tick 100 00:04:46.782 tick 100 00:04:46.782 tick 250 00:04:46.782 tick 100 00:04:46.782 tick 500 00:04:46.782 tick 100 00:04:46.782 tick 100 00:04:46.782 tick 250 00:04:46.782 tick 100 00:04:46.782 tick 100 00:04:46.782 test_end 00:04:46.782 00:04:46.782 real 0m1.324s 00:04:46.782 user 0m1.244s 00:04:46.782 sys 0m0.075s 00:04:46.782 00:41:21 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.782 00:41:21 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:46.782 ************************************ 00:04:46.782 END TEST event_reactor 00:04:46.782 ************************************ 00:04:46.782 00:41:21 event -- common/autotest_common.sh@1142 -- # return 0 00:04:46.782 00:41:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.782 00:41:21 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:46.782 00:41:21 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.782 00:41:21 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.782 ************************************ 00:04:46.782 START TEST event_reactor_perf 00:04:46.782 ************************************ 00:04:46.782 00:41:21 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.782 [2024-07-16 00:41:21.298002] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:04:46.782 [2024-07-16 00:41:21.298081] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522585 ] 00:04:46.782 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.782 [2024-07-16 00:41:21.359757] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.782 [2024-07-16 00:41:21.476168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.154 test_start 00:04:48.154 test_end 00:04:48.154 Performance: 354256 events per second 00:04:48.154 00:04:48.154 real 0m1.310s 00:04:48.154 user 0m1.222s 00:04:48.154 sys 0m0.083s 00:04:48.154 00:41:22 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.154 00:41:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:48.154 ************************************ 00:04:48.154 END TEST event_reactor_perf 00:04:48.154 ************************************ 00:04:48.154 00:41:22 event -- common/autotest_common.sh@1142 -- # return 0 00:04:48.154 00:41:22 event -- event/event.sh@49 -- # uname -s 00:04:48.154 00:41:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:48.154 00:41:22 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:48.154 00:41:22 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.154 00:41:22 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.154 00:41:22 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.154 ************************************ 00:04:48.154 START TEST event_scheduler 00:04:48.154 ************************************ 00:04:48.154 00:41:22 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:48.154 * Looking for test storage... 00:04:48.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:48.154 00:41:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:48.155 00:41:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2522859 00:04:48.155 00:41:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:48.155 00:41:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.155 00:41:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2522859 00:04:48.155 00:41:22 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2522859 ']' 00:04:48.155 00:41:22 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.155 00:41:22 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.155 00:41:22 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:48.155 00:41:22 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.155 00:41:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.155 [2024-07-16 00:41:22.737173] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:04:48.155 [2024-07-16 00:41:22.737243] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522859 ] 00:04:48.155 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.155 [2024-07-16 00:41:22.796540] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:48.155 [2024-07-16 00:41:22.905364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.155 [2024-07-16 00:41:22.905430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.155 [2024-07-16 00:41:22.905469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:48.155 [2024-07-16 00:41:22.905473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.414 00:41:22 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.414 00:41:22 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:48.414 00:41:22 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:48.414 00:41:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.414 00:41:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.414 [2024-07-16 00:41:22.946352] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:48.414 [2024-07-16 00:41:22.946378] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:48.414 [2024-07-16 00:41:22.946412] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:48.414 [2024-07-16 00:41:22.946423] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:48.414 [2024-07-16 00:41:22.946433] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:48.414 00:41:22 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.414 00:41:22 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:48.414 00:41:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.414 00:41:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.414 [2024-07-16 00:41:23.039610] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
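Because the scheduler app above was started with --wait-for-rpc, the dynamic scheduler can be selected before subsystem initialization, which is what the rpc_cmd calls above do. The same sequence against a generic target would look roughly like this (spdk_tgt is used purely for illustration; the test drives its own scheduler binary):

  ./build/bin/spdk_tgt --wait-for-rpc &
  sleep 2                                              # crude wait; see the polling sketch above
  ./scripts/rpc.py framework_set_scheduler dynamic     # must precede framework_start_init
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py framework_get_scheduler             # confirm which scheduler is active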
00:04:48.414 00:41:23 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.414 00:41:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:48.414 00:41:23 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.414 00:41:23 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.414 00:41:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.414 ************************************ 00:04:48.414 START TEST scheduler_create_thread 00:04:48.414 ************************************ 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.414 2 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.414 3 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.414 4 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.414 5 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.414 6 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.414 7 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.414 8 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.414 9 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.414 00:41:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.415 10 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.415 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.980 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.980 00:04:48.980 real 0m0.588s 00:04:48.980 user 0m0.010s 00:04:48.980 sys 0m0.003s 00:04:48.980 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.980 00:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.980 ************************************ 00:04:48.980 END TEST scheduler_create_thread 00:04:48.980 ************************************ 00:04:48.980 00:41:23 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:48.980 00:41:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:48.980 00:41:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2522859 00:04:48.980 00:41:23 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2522859 ']' 00:04:48.980 00:41:23 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2522859 00:04:48.980 00:41:23 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:48.980 00:41:23 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:48.980 00:41:23 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2522859 00:04:48.980 00:41:23 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:48.980 00:41:23 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:48.980 00:41:23 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2522859' 00:04:48.980 killing process with pid 2522859 00:04:48.980 00:41:23 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2522859 00:04:48.980 00:41:23 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2522859 00:04:49.544 [2024-07-16 00:41:24.131793] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:49.802 00:04:49.802 real 0m1.749s 00:04:49.802 user 0m2.184s 00:04:49.802 sys 0m0.315s 00:04:49.802 00:41:24 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.802 00:41:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.802 ************************************ 00:04:49.802 END TEST event_scheduler 00:04:49.802 ************************************ 00:04:49.802 00:41:24 event -- common/autotest_common.sh@1142 -- # return 0 00:04:49.802 00:41:24 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:49.802 00:41:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:49.802 00:41:24 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.802 00:41:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.802 00:41:24 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.802 ************************************ 00:04:49.802 START TEST app_repeat 00:04:49.802 ************************************ 00:04:49.802 00:41:24 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:49.802 00:41:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.802 00:41:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.802 00:41:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:49.802 00:41:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.802 00:41:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:49.802 00:41:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:49.802 00:41:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:49.802 00:41:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2523047 00:04:49.802 00:41:24 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:49.802 00:41:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.802 00:41:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2523047' 00:04:49.802 Process app_repeat pid: 2523047 00:04:49.802 00:41:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:49.802 00:41:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:49.802 spdk_app_start Round 0 00:04:49.802 00:41:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2523047 /var/tmp/spdk-nbd.sock 00:04:49.802 00:41:24 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2523047 ']' 00:04:49.802 00:41:24 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:49.802 00:41:24 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.802 00:41:24 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:49.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:49.802 00:41:24 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.802 00:41:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:49.802 [2024-07-16 00:41:24.476981] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
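How app_repeat itself is brought up and torn down, condensed from the event.sh lines in the trace above. This is a sketch of the pattern, assuming app_repeat is on PATH and that killprocess and waitforlisten are the SPDK helpers referenced in the log:

    # start the repeat app on its own RPC socket, two cores (0x3), four repetitions
    app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    # block until the app is up and listening on the UNIX domain socket
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock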
00:04:49.802 [2024-07-16 00:41:24.477045] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523047 ] 00:04:49.802 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.802 [2024-07-16 00:41:24.539870] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.060 [2024-07-16 00:41:24.657913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.060 [2024-07-16 00:41:24.657921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.060 00:41:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.060 00:41:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:50.060 00:41:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.318 Malloc0 00:04:50.318 00:41:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.577 Malloc1 00:04:50.577 00:41:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.577 00:41:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.577 00:41:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.577 00:41:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:50.577 00:41:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.577 00:41:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:50.577 00:41:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.577 00:41:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.577 00:41:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.577 00:41:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:50.577 00:41:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.577 00:41:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:50.577 00:41:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:50.577 00:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:50.577 00:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.577 00:41:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:50.875 /dev/nbd0 00:04:50.875 00:41:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:50.875 00:41:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:50.875 00:41:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:50.875 00:41:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:50.875 00:41:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:50.875 00:41:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:50.875 00:41:25 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:50.875 00:41:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:50.875 00:41:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:50.875 00:41:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:50.876 00:41:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.876 1+0 records in 00:04:50.876 1+0 records out 00:04:50.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000148018 s, 27.7 MB/s 00:04:50.876 00:41:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.876 00:41:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:50.876 00:41:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.876 00:41:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:50.876 00:41:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:50.876 00:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.876 00:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.876 00:41:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:51.141 /dev/nbd1 00:04:51.141 00:41:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:51.141 00:41:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:51.141 00:41:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:51.141 00:41:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:51.141 00:41:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:51.141 00:41:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:51.141 00:41:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:51.141 00:41:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:51.141 00:41:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:51.141 00:41:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:51.141 00:41:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.141 1+0 records in 00:04:51.141 1+0 records out 00:04:51.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212622 s, 19.3 MB/s 00:04:51.141 00:41:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.141 00:41:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:51.141 00:41:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.141 00:41:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:51.141 00:41:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:51.141 00:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.141 00:41:25 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.141 00:41:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.141 00:41:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.141 00:41:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.398 00:41:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:51.398 { 00:04:51.398 "nbd_device": "/dev/nbd0", 00:04:51.398 "bdev_name": "Malloc0" 00:04:51.398 }, 00:04:51.398 { 00:04:51.398 "nbd_device": "/dev/nbd1", 00:04:51.398 "bdev_name": "Malloc1" 00:04:51.398 } 00:04:51.398 ]' 00:04:51.398 00:41:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:51.398 { 00:04:51.398 "nbd_device": "/dev/nbd0", 00:04:51.398 "bdev_name": "Malloc0" 00:04:51.398 }, 00:04:51.398 { 00:04:51.398 "nbd_device": "/dev/nbd1", 00:04:51.398 "bdev_name": "Malloc1" 00:04:51.398 } 00:04:51.398 ]' 00:04:51.398 00:41:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.398 00:41:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:51.398 /dev/nbd1' 00:04:51.398 00:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:51.398 /dev/nbd1' 00:04:51.398 00:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.398 00:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:51.398 00:41:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:51.398 00:41:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:51.398 00:41:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:51.398 00:41:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:51.398 00:41:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.398 00:41:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.398 00:41:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:51.398 00:41:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.398 00:41:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:51.398 00:41:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:51.398 256+0 records in 00:04:51.398 256+0 records out 00:04:51.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496332 s, 211 MB/s 00:04:51.398 00:41:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.398 00:41:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:51.655 256+0 records in 00:04:51.655 256+0 records out 00:04:51.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234684 s, 44.7 MB/s 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:51.655 256+0 records in 00:04:51.655 256+0 records out 00:04:51.655 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0262052 s, 40.0 MB/s 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.655 00:41:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:51.912 00:41:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:51.912 00:41:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:51.912 00:41:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:51.912 00:41:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.912 00:41:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.912 00:41:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:51.912 00:41:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.912 00:41:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.912 00:41:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.912 00:41:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:52.170 00:41:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:52.170 00:41:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:52.170 00:41:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:52.170 00:41:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.170 00:41:26 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.170 00:41:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:52.170 00:41:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.170 00:41:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.170 00:41:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.170 00:41:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.170 00:41:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.427 00:41:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:52.427 00:41:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:52.427 00:41:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.427 00:41:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:52.427 00:41:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:52.427 00:41:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.427 00:41:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:52.427 00:41:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:52.427 00:41:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:52.427 00:41:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:52.427 00:41:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:52.427 00:41:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:52.427 00:41:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:52.685 00:41:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:52.942 [2024-07-16 00:41:27.608740] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.199 [2024-07-16 00:41:27.724998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.199 [2024-07-16 00:41:27.724998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.199 [2024-07-16 00:41:27.787174] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:53.199 [2024-07-16 00:41:27.787268] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:55.722 00:41:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:55.722 00:41:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:55.722 spdk_app_start Round 1 00:04:55.722 00:41:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2523047 /var/tmp/spdk-nbd.sock 00:04:55.722 00:41:30 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2523047 ']' 00:04:55.722 00:41:30 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:55.722 00:41:30 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.722 00:41:30 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:55.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
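Every app_repeat round in this log performs the same NBD round trip. A condensed sketch of one round, assuming rpc.py targets the spdk-nbd.sock socket shown above and using /tmp/nbdrandtest as an illustrative stand-in for the test's temp file (the real test writes and verifies in separate passes):

    # export two 64 MB malloc bdevs (4096-byte blocks) over NBD
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc0
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc1
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1

    # write a 1 MiB random pattern to each device, then compare it back
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest "$dev"
    done
    rm /tmp/nbdrandtest

    # tear down before the next round
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1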
00:04:55.722 00:41:30 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.722 00:41:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:55.979 00:41:30 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.979 00:41:30 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:55.979 00:41:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.237 Malloc0 00:04:56.237 00:41:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.495 Malloc1 00:04:56.495 00:41:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.495 00:41:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.495 00:41:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.495 00:41:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:56.495 00:41:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.495 00:41:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:56.495 00:41:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.495 00:41:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.495 00:41:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.495 00:41:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:56.495 00:41:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.495 00:41:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:56.495 00:41:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:56.495 00:41:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:56.495 00:41:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.495 00:41:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:56.753 /dev/nbd0 00:04:56.753 00:41:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:56.753 00:41:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:56.753 00:41:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:56.753 00:41:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:56.753 00:41:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:56.753 00:41:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:56.753 00:41:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:56.753 00:41:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:56.753 00:41:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:56.753 00:41:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:56.753 00:41:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:56.753 1+0 records in 00:04:56.753 1+0 records out 00:04:56.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000161663 s, 25.3 MB/s 00:04:56.753 00:41:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:56.753 00:41:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:56.753 00:41:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:56.753 00:41:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:56.753 00:41:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:56.753 00:41:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.753 00:41:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.753 00:41:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:57.011 /dev/nbd1 00:04:57.011 00:41:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:57.011 00:41:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:57.011 00:41:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:57.011 00:41:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:57.011 00:41:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:57.011 00:41:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:57.011 00:41:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:57.011 00:41:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:57.011 00:41:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:57.011 00:41:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:57.011 00:41:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.011 1+0 records in 00:04:57.011 1+0 records out 00:04:57.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184421 s, 22.2 MB/s 00:04:57.011 00:41:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.011 00:41:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:57.011 00:41:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.011 00:41:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:57.011 00:41:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:57.011 00:41:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.011 00:41:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.011 00:41:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.011 00:41:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.011 00:41:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.268 00:41:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:57.268 { 00:04:57.268 "nbd_device": "/dev/nbd0", 00:04:57.268 "bdev_name": "Malloc0" 00:04:57.268 }, 00:04:57.268 { 00:04:57.268 "nbd_device": "/dev/nbd1", 00:04:57.268 "bdev_name": "Malloc1" 00:04:57.268 } 00:04:57.268 ]' 00:04:57.268 00:41:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:57.268 { 00:04:57.268 "nbd_device": "/dev/nbd0", 00:04:57.268 "bdev_name": "Malloc0" 00:04:57.268 }, 00:04:57.268 { 00:04:57.268 "nbd_device": "/dev/nbd1", 00:04:57.268 "bdev_name": "Malloc1" 00:04:57.268 } 00:04:57.268 ]' 00:04:57.268 00:41:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.268 00:41:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:57.268 /dev/nbd1' 00:04:57.268 00:41:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:57.268 /dev/nbd1' 00:04:57.268 00:41:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.268 00:41:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:57.268 00:41:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:57.268 00:41:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:57.268 00:41:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:57.268 00:41:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:57.268 00:41:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.268 00:41:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.268 00:41:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:57.268 00:41:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.268 00:41:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:57.268 00:41:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:57.268 256+0 records in 00:04:57.268 256+0 records out 00:04:57.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501562 s, 209 MB/s 00:04:57.268 00:41:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.268 00:41:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:57.268 256+0 records in 00:04:57.268 256+0 records out 00:04:57.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243967 s, 43.0 MB/s 00:04:57.268 00:41:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.268 00:41:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:57.525 256+0 records in 00:04:57.525 256+0 records out 00:04:57.525 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262799 s, 39.9 MB/s 00:04:57.525 00:41:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:57.525 00:41:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.525 00:41:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.525 00:41:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:57.525 00:41:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.525 00:41:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:57.525 00:41:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:57.525 00:41:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.525 00:41:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:57.525 00:41:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.525 00:41:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:57.525 00:41:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.525 00:41:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:57.525 00:41:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.525 00:41:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.525 00:41:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:57.525 00:41:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:57.525 00:41:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.525 00:41:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:57.782 00:41:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:57.782 00:41:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:57.782 00:41:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:57.782 00:41:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.782 00:41:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.782 00:41:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:57.782 00:41:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.782 00:41:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.782 00:41:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.782 00:41:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:58.039 00:41:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:58.039 00:41:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:58.039 00:41:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:58.039 00:41:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.039 00:41:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.039 00:41:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:58.039 00:41:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.039 00:41:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.039 00:41:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.039 00:41:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.039 00:41:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.295 00:41:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:58.295 00:41:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:58.295 00:41:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.295 00:41:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:58.295 00:41:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:58.295 00:41:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.295 00:41:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:58.295 00:41:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:58.295 00:41:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:58.295 00:41:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:58.296 00:41:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:58.296 00:41:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:58.296 00:41:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:58.553 00:41:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:58.810 [2024-07-16 00:41:33.423525] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.810 [2024-07-16 00:41:33.538903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.810 [2024-07-16 00:41:33.538909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.067 [2024-07-16 00:41:33.602270] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.067 [2024-07-16 00:41:33.602352] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:01.593 00:41:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:01.593 00:41:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:01.593 spdk_app_start Round 2 00:05:01.593 00:41:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2523047 /var/tmp/spdk-nbd.sock 00:05:01.593 00:41:36 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2523047 ']' 00:05:01.593 00:41:36 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:01.593 00:41:36 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.593 00:41:36 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:01.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
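The repeated autotest_common.sh@866-887 block after every nbd_start_disk above is the waitfornbd helper: poll /proc/partitions until the device node appears, then read one block back to prove it answers I/O. A sketch of that idea (the poll interval and retry count are assumptions, not the exact SPDK implementation):

    waitfornbd() {
        local nbd_name=$1 i
        # wait for the kernel to publish the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # read a single block back; a non-empty copy means the device is live
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || continue
            if [ "$(stat -c %s /tmp/nbdtest)" != 0 ]; then
                rm -f /tmp/nbdtest
                return 0
            fi
            sleep 0.1
        done
        rm -f /tmp/nbdtest
        return 1
    }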
00:05:01.593 00:41:36 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.593 00:41:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:01.850 00:41:36 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.850 00:41:36 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:01.850 00:41:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.108 Malloc0 00:05:02.108 00:41:36 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.367 Malloc1 00:05:02.367 00:41:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.367 00:41:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.367 00:41:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.367 00:41:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:02.367 00:41:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.367 00:41:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:02.367 00:41:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.367 00:41:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.367 00:41:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.367 00:41:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:02.367 00:41:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.367 00:41:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:02.367 00:41:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:02.367 00:41:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:02.367 00:41:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.367 00:41:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:02.624 /dev/nbd0 00:05:02.624 00:41:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:02.624 00:41:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:02.624 00:41:37 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:02.624 00:41:37 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:02.624 00:41:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:02.624 00:41:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:02.624 00:41:37 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:02.624 00:41:37 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:02.624 00:41:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:02.624 00:41:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:02.624 00:41:37 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:02.624 1+0 records in 00:05:02.624 1+0 records out 00:05:02.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165381 s, 24.8 MB/s 00:05:02.624 00:41:37 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.624 00:41:37 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:02.624 00:41:37 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.624 00:41:37 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:02.624 00:41:37 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:02.624 00:41:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.624 00:41:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.624 00:41:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:02.882 /dev/nbd1 00:05:02.882 00:41:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:02.882 00:41:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:02.882 00:41:37 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:02.882 00:41:37 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:02.882 00:41:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:02.882 00:41:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:02.882 00:41:37 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:02.882 00:41:37 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:02.882 00:41:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:02.882 00:41:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:02.882 00:41:37 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.882 1+0 records in 00:05:02.882 1+0 records out 00:05:02.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204226 s, 20.1 MB/s 00:05:02.882 00:41:37 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.882 00:41:37 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:02.882 00:41:37 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.882 00:41:37 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:02.882 00:41:37 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:02.882 00:41:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.882 00:41:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.882 00:41:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.882 00:41:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.882 00:41:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:03.140 { 00:05:03.140 "nbd_device": "/dev/nbd0", 00:05:03.140 "bdev_name": "Malloc0" 00:05:03.140 }, 00:05:03.140 { 00:05:03.140 "nbd_device": "/dev/nbd1", 00:05:03.140 "bdev_name": "Malloc1" 00:05:03.140 } 00:05:03.140 ]' 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:03.140 { 00:05:03.140 "nbd_device": "/dev/nbd0", 00:05:03.140 "bdev_name": "Malloc0" 00:05:03.140 }, 00:05:03.140 { 00:05:03.140 "nbd_device": "/dev/nbd1", 00:05:03.140 "bdev_name": "Malloc1" 00:05:03.140 } 00:05:03.140 ]' 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:03.140 /dev/nbd1' 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:03.140 /dev/nbd1' 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:03.140 256+0 records in 00:05:03.140 256+0 records out 00:05:03.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501344 s, 209 MB/s 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:03.140 256+0 records in 00:05:03.140 256+0 records out 00:05:03.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242934 s, 43.2 MB/s 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:03.140 256+0 records in 00:05:03.140 256+0 records out 00:05:03.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261161 s, 40.2 MB/s 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.140 00:41:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:03.397 00:41:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:03.397 00:41:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:03.397 00:41:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:03.397 00:41:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.397 00:41:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.397 00:41:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:03.397 00:41:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.397 00:41:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.397 00:41:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.397 00:41:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:03.655 00:41:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:03.655 00:41:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:03.655 00:41:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:03.655 00:41:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.655 00:41:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.655 00:41:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:03.655 00:41:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.655 00:41:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.655 00:41:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.655 00:41:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.655 00:41:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.912 00:41:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:03.912 00:41:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:03.912 00:41:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.170 00:41:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:04.170 00:41:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:04.170 00:41:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.170 00:41:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:04.170 00:41:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:04.170 00:41:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:04.170 00:41:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:04.170 00:41:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:04.170 00:41:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:04.170 00:41:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:04.427 00:41:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:04.685 [2024-07-16 00:41:39.226953] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.685 [2024-07-16 00:41:39.342615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.685 [2024-07-16 00:41:39.342644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.685 [2024-07-16 00:41:39.402166] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:04.685 [2024-07-16 00:41:39.402253] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:07.212 00:41:41 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2523047 /var/tmp/spdk-nbd.sock 00:05:07.212 00:41:41 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2523047 ']' 00:05:07.212 00:41:41 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.212 00:41:41 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.212 00:41:41 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:07.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
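The nbd_get_disks JSON seen after each start and stop is how the test counts live exports: two while a round is running, zero once both devices are stopped. A sketch of that count check, mirroring the nbd_common.sh helper referenced in the trace but not its exact implementation:

    nbd_get_count() {
        local rpc_server=$1
        # list exported devices as JSON and count the /dev/nbd entries
        rpc.py -s "$rpc_server" nbd_get_disks \
            | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd || true   # grep -c exits 1 when the count is 0
    }

    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
    if [ "$count" -ne 0 ]; then
        echo "devices still exported: $count"
        exit 1
    fi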
00:05:07.212 00:41:41 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.212 00:41:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.478 00:41:42 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.478 00:41:42 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:07.478 00:41:42 event.app_repeat -- event/event.sh@39 -- # killprocess 2523047 00:05:07.478 00:41:42 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2523047 ']' 00:05:07.478 00:41:42 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2523047 00:05:07.478 00:41:42 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:07.478 00:41:42 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:07.478 00:41:42 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2523047 00:05:07.780 00:41:42 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:07.780 00:41:42 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:07.780 00:41:42 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2523047' 00:05:07.780 killing process with pid 2523047 00:05:07.780 00:41:42 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2523047 00:05:07.780 00:41:42 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2523047 00:05:07.780 spdk_app_start is called in Round 0. 00:05:07.780 Shutdown signal received, stop current app iteration 00:05:07.780 Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 reinitialization... 00:05:07.780 spdk_app_start is called in Round 1. 00:05:07.780 Shutdown signal received, stop current app iteration 00:05:07.780 Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 reinitialization... 00:05:07.780 spdk_app_start is called in Round 2. 00:05:07.780 Shutdown signal received, stop current app iteration 00:05:07.780 Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 reinitialization... 00:05:07.780 spdk_app_start is called in Round 3. 
00:05:07.780 Shutdown signal received, stop current app iteration 00:05:07.780 00:41:42 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:07.780 00:41:42 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:07.780 00:05:07.780 real 0m18.048s 00:05:07.780 user 0m38.859s 00:05:07.780 sys 0m3.301s 00:05:07.780 00:41:42 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.780 00:41:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.780 ************************************ 00:05:07.780 END TEST app_repeat 00:05:07.780 ************************************ 00:05:07.780 00:41:42 event -- common/autotest_common.sh@1142 -- # return 0 00:05:07.780 00:41:42 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:07.780 00:41:42 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:07.780 00:41:42 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.780 00:41:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.780 00:41:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.044 ************************************ 00:05:08.044 START TEST cpu_locks 00:05:08.044 ************************************ 00:05:08.044 00:41:42 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:08.044 * Looking for test storage... 00:05:08.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:08.044 00:41:42 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:08.044 00:41:42 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:08.044 00:41:42 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:08.044 00:41:42 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:08.044 00:41:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.044 00:41:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.044 00:41:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.044 ************************************ 00:05:08.044 START TEST default_locks 00:05:08.044 ************************************ 00:05:08.044 00:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:08.044 00:41:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2525402 00:05:08.044 00:41:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.044 00:41:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2525402 00:05:08.044 00:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2525402 ']' 00:05:08.044 00:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.044 00:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.044 00:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
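A minimal sketch of the locks_exist check this test performs just below, assuming the per-core lock-file naming that appears later in the log (/var/tmp/spdk_cpu_lock_NNN) and reusing the pid from the trace:
# An spdk_tgt started with -m 0x1 should hold a lock on its core-lock file,
# which lslocks reports against the target's pid.
pid=2525402                                   # spdk_tgt pid from the trace above
lslocks -p "$pid" | grep -q spdk_cpu_lock \
  && echo "core lock held by $pid" \
  || echo "no core lock for $pid"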
00:05:08.044 00:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.044 00:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.044 [2024-07-16 00:41:42.675853] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:08.044 [2024-07-16 00:41:42.675959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2525402 ] 00:05:08.044 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.044 [2024-07-16 00:41:42.737078] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.303 [2024-07-16 00:41:42.851846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.561 00:41:43 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.561 00:41:43 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:08.561 00:41:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2525402 00:05:08.561 00:41:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2525402 00:05:08.561 00:41:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.818 lslocks: write error 00:05:08.818 00:41:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2525402 00:05:08.818 00:41:43 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2525402 ']' 00:05:08.818 00:41:43 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2525402 00:05:08.818 00:41:43 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:08.818 00:41:43 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:08.818 00:41:43 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2525402 00:05:08.818 00:41:43 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:08.818 00:41:43 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:08.818 00:41:43 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2525402' 00:05:08.818 killing process with pid 2525402 00:05:08.818 00:41:43 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2525402 00:05:08.818 00:41:43 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2525402 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2525402 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2525402 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2525402 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2525402 ']' 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2525402) - No such process 00:05:09.383 ERROR: process (pid: 2525402) is no longer running 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:09.383 00:41:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:09.384 00:41:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:09.384 00:05:09.384 real 0m1.390s 00:05:09.384 user 0m1.332s 00:05:09.384 sys 0m0.556s 00:05:09.384 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.384 00:41:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.384 ************************************ 00:05:09.384 END TEST default_locks 00:05:09.384 ************************************ 00:05:09.384 00:41:44 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:09.384 00:41:44 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:09.384 00:41:44 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.384 00:41:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.384 00:41:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.384 ************************************ 00:05:09.384 START TEST default_locks_via_rpc 00:05:09.384 ************************************ 00:05:09.384 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:09.384 00:41:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2525692 00:05:09.384 00:41:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.384 00:41:44 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2525692 00:05:09.384 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2525692 ']' 00:05:09.384 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.384 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.384 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.384 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.384 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.384 [2024-07-16 00:41:44.117146] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:09.384 [2024-07-16 00:41:44.117264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2525692 ] 00:05:09.642 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.642 [2024-07-16 00:41:44.177441] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.642 [2024-07-16 00:41:44.286985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.899 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.899 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:09.899 00:41:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:09.899 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.899 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.899 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.899 00:41:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:09.899 00:41:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:09.899 00:41:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:09.899 00:41:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:09.899 00:41:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:09.899 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.899 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.899 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.899 00:41:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2525692 00:05:09.899 00:41:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2525692 00:05:09.899 00:41:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:10.157 
00:41:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2525692 00:05:10.157 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2525692 ']' 00:05:10.157 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2525692 00:05:10.157 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:10.157 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.157 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2525692 00:05:10.157 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.157 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.157 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2525692' 00:05:10.157 killing process with pid 2525692 00:05:10.157 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2525692 00:05:10.157 00:41:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2525692 00:05:10.721 00:05:10.721 real 0m1.293s 00:05:10.721 user 0m1.221s 00:05:10.721 sys 0m0.524s 00:05:10.721 00:41:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.721 00:41:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.721 ************************************ 00:05:10.721 END TEST default_locks_via_rpc 00:05:10.721 ************************************ 00:05:10.721 00:41:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:10.721 00:41:45 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:10.722 00:41:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.722 00:41:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.722 00:41:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.722 ************************************ 00:05:10.722 START TEST non_locking_app_on_locked_coremask 00:05:10.722 ************************************ 00:05:10.722 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:10.722 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2525852 00:05:10.722 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.722 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2525852 /var/tmp/spdk.sock 00:05:10.722 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2525852 ']' 00:05:10.722 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.722 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.722 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.722 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.722 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.722 [2024-07-16 00:41:45.453826] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:10.722 [2024-07-16 00:41:45.453948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2525852 ] 00:05:10.979 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.979 [2024-07-16 00:41:45.512845] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.979 [2024-07-16 00:41:45.622634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.236 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.236 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:11.236 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2525861 00:05:11.236 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:11.236 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2525861 /var/tmp/spdk2.sock 00:05:11.236 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2525861 ']' 00:05:11.236 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.236 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.236 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.236 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.237 00:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.237 [2024-07-16 00:41:45.935843] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:11.237 [2024-07-16 00:41:45.935959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2525861 ] 00:05:11.237 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.495 [2024-07-16 00:41:46.029442] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
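A minimal sketch of the scenario this test builds with the two commands traced above: the first target claims core 0, while the second shares the same mask but is started with --disable-cpumask-locks and its own RPC socket, so it is expected to come up without contending for the core lock.
# Paths shortened to the SPDK tree; the pids in the trace are 2525852 and 2525861.
./build/bin/spdk_tgt -m 0x1 &                                                  # holds the core 0 lock
./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # skips core locking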
00:05:11.495 [2024-07-16 00:41:46.029475] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.752 [2024-07-16 00:41:46.264702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.317 00:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.317 00:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:12.317 00:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2525852 00:05:12.317 00:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2525852 00:05:12.317 00:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.882 lslocks: write error 00:05:12.882 00:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2525852 00:05:12.882 00:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2525852 ']' 00:05:12.882 00:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2525852 00:05:12.882 00:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:12.882 00:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.882 00:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2525852 00:05:12.882 00:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.882 00:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.882 00:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2525852' 00:05:12.882 killing process with pid 2525852 00:05:12.882 00:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2525852 00:05:12.882 00:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2525852 00:05:13.817 00:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2525861 00:05:13.817 00:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2525861 ']' 00:05:13.817 00:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2525861 00:05:13.817 00:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:13.817 00:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.817 00:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2525861 00:05:13.817 00:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.817 00:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.817 00:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2525861' 00:05:13.817 
killing process with pid 2525861 00:05:13.817 00:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2525861 00:05:13.817 00:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2525861 00:05:14.384 00:05:14.384 real 0m3.516s 00:05:14.384 user 0m3.648s 00:05:14.384 sys 0m1.096s 00:05:14.384 00:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.384 00:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.384 ************************************ 00:05:14.384 END TEST non_locking_app_on_locked_coremask 00:05:14.384 ************************************ 00:05:14.384 00:41:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:14.384 00:41:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:14.384 00:41:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.384 00:41:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.384 00:41:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.384 ************************************ 00:05:14.384 START TEST locking_app_on_unlocked_coremask 00:05:14.384 ************************************ 00:05:14.384 00:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:14.384 00:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2526292 00:05:14.384 00:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:14.384 00:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2526292 /var/tmp/spdk.sock 00:05:14.384 00:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2526292 ']' 00:05:14.384 00:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.384 00:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.384 00:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.384 00:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.384 00:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.384 [2024-07-16 00:41:49.021519] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:05:14.384 [2024-07-16 00:41:49.021607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526292 ] 00:05:14.384 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.384 [2024-07-16 00:41:49.082788] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:14.384 [2024-07-16 00:41:49.082835] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.643 [2024-07-16 00:41:49.195827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.210 00:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.210 00:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:15.210 00:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2526430 00:05:15.210 00:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:15.210 00:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2526430 /var/tmp/spdk2.sock 00:05:15.210 00:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2526430 ']' 00:05:15.210 00:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.210 00:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.210 00:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.210 00:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.210 00:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.468 [2024-07-16 00:41:50.003802] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:05:15.468 [2024-07-16 00:41:50.003913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526430 ] 00:05:15.468 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.468 [2024-07-16 00:41:50.103169] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.739 [2024-07-16 00:41:50.341127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.306 00:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.306 00:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:16.306 00:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2526430 00:05:16.306 00:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2526430 00:05:16.306 00:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.872 lslocks: write error 00:05:16.872 00:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2526292 00:05:16.872 00:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2526292 ']' 00:05:16.872 00:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2526292 00:05:16.872 00:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:16.873 00:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.873 00:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2526292 00:05:16.873 00:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.873 00:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.873 00:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2526292' 00:05:16.873 killing process with pid 2526292 00:05:16.873 00:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2526292 00:05:16.873 00:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2526292 00:05:17.808 00:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2526430 00:05:17.808 00:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2526430 ']' 00:05:17.808 00:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2526430 00:05:17.808 00:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:17.808 00:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.808 00:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2526430 00:05:17.808 00:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:17.808 00:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.808 00:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2526430' 00:05:17.808 killing process with pid 2526430 00:05:17.808 00:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2526430 00:05:17.808 00:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2526430 00:05:18.375 00:05:18.376 real 0m3.939s 00:05:18.376 user 0m4.261s 00:05:18.376 sys 0m1.124s 00:05:18.376 00:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.376 00:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.376 ************************************ 00:05:18.376 END TEST locking_app_on_unlocked_coremask 00:05:18.376 ************************************ 00:05:18.376 00:41:52 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:18.376 00:41:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:18.376 00:41:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.376 00:41:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.376 00:41:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.376 ************************************ 00:05:18.376 START TEST locking_app_on_locked_coremask 00:05:18.376 ************************************ 00:05:18.376 00:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:18.376 00:41:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2526859 00:05:18.376 00:41:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.376 00:41:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2526859 /var/tmp/spdk.sock 00:05:18.376 00:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2526859 ']' 00:05:18.376 00:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.376 00:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.376 00:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.376 00:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.376 00:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.376 [2024-07-16 00:41:52.999712] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:05:18.376 [2024-07-16 00:41:52.999785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526859 ] 00:05:18.376 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.376 [2024-07-16 00:41:53.061425] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.635 [2024-07-16 00:41:53.183023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.894 00:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.894 00:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:18.894 00:41:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2526864 00:05:18.894 00:41:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:18.894 00:41:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2526864 /var/tmp/spdk2.sock 00:05:18.894 00:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:18.894 00:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2526864 /var/tmp/spdk2.sock 00:05:18.894 00:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:18.894 00:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.894 00:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:18.894 00:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.894 00:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2526864 /var/tmp/spdk2.sock 00:05:18.894 00:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2526864 ']' 00:05:18.894 00:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.894 00:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.894 00:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:18.894 00:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.894 00:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.894 [2024-07-16 00:41:53.500221] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:05:18.894 [2024-07-16 00:41:53.500306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526864 ] 00:05:18.894 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.894 [2024-07-16 00:41:53.596602] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2526859 has claimed it. 00:05:18.894 [2024-07-16 00:41:53.596667] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:19.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2526864) - No such process 00:05:19.460 ERROR: process (pid: 2526864) is no longer running 00:05:19.460 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.460 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:19.460 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:19.460 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:19.460 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:19.460 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:19.460 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2526859 00:05:19.460 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2526859 00:05:19.460 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.026 lslocks: write error 00:05:20.026 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2526859 00:05:20.026 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2526859 ']' 00:05:20.026 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2526859 00:05:20.026 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:20.026 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:20.026 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2526859 00:05:20.026 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:20.026 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:20.026 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2526859' 00:05:20.026 killing process with pid 2526859 00:05:20.026 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2526859 00:05:20.026 00:41:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2526859 00:05:20.593 00:05:20.593 real 0m2.148s 00:05:20.593 user 0m2.329s 00:05:20.593 sys 0m0.650s 00:05:20.593 00:41:55 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.593 00:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.593 ************************************ 00:05:20.593 END TEST locking_app_on_locked_coremask 00:05:20.593 ************************************ 00:05:20.593 00:41:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:20.593 00:41:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:20.593 00:41:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.593 00:41:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.593 00:41:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.593 ************************************ 00:05:20.593 START TEST locking_overlapped_coremask 00:05:20.593 ************************************ 00:05:20.593 00:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:20.593 00:41:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2527133 00:05:20.593 00:41:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:20.593 00:41:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2527133 /var/tmp/spdk.sock 00:05:20.593 00:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2527133 ']' 00:05:20.593 00:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.593 00:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.593 00:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.593 00:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.593 00:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.593 [2024-07-16 00:41:55.202683] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
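A side note on the core masks this test uses (-m 0x7 here, -m 0x1c for the second target below), given as a small sketch rather than test output: the two masks overlap only on core 2, which is the core the second target later fails to claim.
# 0x7 = 0b00111 (cores 0-2), 0x1c = 0b11100 (cores 2-4); their intersection is bit 2.
printf 'shared core mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2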
00:05:20.593 [2024-07-16 00:41:55.202781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2527133 ] 00:05:20.593 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.593 [2024-07-16 00:41:55.283767] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:20.852 [2024-07-16 00:41:55.401939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.852 [2024-07-16 00:41:55.402009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.852 [2024-07-16 00:41:55.402013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.418 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.418 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:21.418 00:41:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2527172 00:05:21.418 00:41:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:21.418 00:41:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2527172 /var/tmp/spdk2.sock 00:05:21.418 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:21.418 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2527172 /var/tmp/spdk2.sock 00:05:21.418 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:21.418 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.418 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:21.418 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.418 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2527172 /var/tmp/spdk2.sock 00:05:21.418 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2527172 ']' 00:05:21.418 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.418 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.418 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.418 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.418 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.678 [2024-07-16 00:41:56.180666] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:05:21.678 [2024-07-16 00:41:56.180742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2527172 ] 00:05:21.678 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.678 [2024-07-16 00:41:56.275270] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2527133 has claimed it. 00:05:21.678 [2024-07-16 00:41:56.275339] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:22.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2527172) - No such process 00:05:22.242 ERROR: process (pid: 2527172) is no longer running 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2527133 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2527133 ']' 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2527133 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2527133 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2527133' 00:05:22.242 killing process with pid 2527133 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2527133 00:05:22.242 00:41:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2527133 00:05:22.844 00:05:22.844 real 0m2.241s 00:05:22.844 user 0m6.224s 00:05:22.844 sys 0m0.525s 00:05:22.844 00:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.844 00:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.844 ************************************ 00:05:22.844 END TEST locking_overlapped_coremask 00:05:22.844 ************************************ 00:05:22.844 00:41:57 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:22.844 00:41:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:22.844 00:41:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.844 00:41:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.844 00:41:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.844 ************************************ 00:05:22.844 START TEST locking_overlapped_coremask_via_rpc 00:05:22.844 ************************************ 00:05:22.844 00:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:22.844 00:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2527453 00:05:22.844 00:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:22.844 00:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2527453 /var/tmp/spdk.sock 00:05:22.844 00:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2527453 ']' 00:05:22.844 00:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.844 00:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.844 00:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.844 00:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.844 00:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.844 [2024-07-16 00:41:57.491622] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:22.844 [2024-07-16 00:41:57.491702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2527453 ] 00:05:22.844 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.844 [2024-07-16 00:41:57.558500] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
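A minimal sketch of the RPC flow this last test drives: both targets start with --disable-cpumask-locks (the first, pid 2527453 on -m 0x7, is launched above; a second on -m 0x1c follows below), the first then claims its cores with framework_enable_cpumask_locks, and the same call against the second target's socket is expected to fail because core 2 is already locked.
# Socket paths as used in the trace.
./scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # primary claims cores 0-2
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # expected to fail on core 2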
00:05:22.844 [2024-07-16 00:41:57.558540] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.102 [2024-07-16 00:41:57.676248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.102 [2024-07-16 00:41:57.676303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.102 [2024-07-16 00:41:57.676308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.668 00:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.668 00:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:23.668 00:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2527478 00:05:23.668 00:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:23.668 00:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2527478 /var/tmp/spdk2.sock 00:05:23.668 00:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2527478 ']' 00:05:23.668 00:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.668 00:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.668 00:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.668 00:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.668 00:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.926 [2024-07-16 00:41:58.478007] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:23.926 [2024-07-16 00:41:58.478099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2527478 ] 00:05:23.926 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.926 [2024-07-16 00:41:58.571398] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
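A note on the core masks used above: the first target runs with -m 0x7 (binary 111, cores 0 through 2, matching the reactors started on cores 1, 2 and 0) and the second with -m 0x1c (binary 11100, cores 2 through 4), so core 2 is the only core both processes try to claim and is the core named in the "Cannot create lock on core 2" errors in this section. A quick way to list the cores in a mask from a shell, purely illustrative and not part of the test scripts:

    mask=0x1c
    for i in $(seq 0 63); do (( (mask >> i) & 1 )) && echo "core $i"; done
    # prints: core 2, core 3, core 4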
00:05:23.926 [2024-07-16 00:41:58.571438] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:24.184 [2024-07-16 00:41:58.789123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.184 [2024-07-16 00:41:58.792935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:24.184 [2024-07-16 00:41:58.792938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.751 [2024-07-16 00:41:59.448003] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2527453 has claimed it. 
00:05:24.751 request: 00:05:24.751 { 00:05:24.751 "method": "framework_enable_cpumask_locks", 00:05:24.751 "req_id": 1 00:05:24.751 } 00:05:24.751 Got JSON-RPC error response 00:05:24.751 response: 00:05:24.751 { 00:05:24.751 "code": -32603, 00:05:24.751 "message": "Failed to claim CPU core: 2" 00:05:24.751 } 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2527453 /var/tmp/spdk.sock 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2527453 ']' 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.751 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.008 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.008 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:25.008 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2527478 /var/tmp/spdk2.sock 00:05:25.008 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2527478 ']' 00:05:25.008 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.008 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.008 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
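The JSON-RPC failure above is the point of this test: both targets start with --disable-cpumask-locks, the first claims its cores through the framework_enable_cpumask_locks RPC, and the same RPC against the second target's socket is then expected to fail with the -32603 "Failed to claim CPU core: 2" response shown. A rough manual equivalent, with the binary path, masks and sockets taken from the trace (the scripts/rpc.py client and its -s flag are assumed here; the log itself only shows the rpc_cmd wrapper):

    ./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &                  # first target, /var/tmp/spdk.sock
    ./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # once both sockets are listening:
    ./scripts/rpc.py framework_enable_cpumask_locks                         # first target locks cores 0-2
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # should fail, core 2 already locked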
00:05:25.008 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.008 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.266 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.266 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:25.266 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:25.266 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:25.266 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:25.266 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:25.266 00:05:25.266 real 0m2.541s 00:05:25.266 user 0m1.189s 00:05:25.266 sys 0m0.272s 00:05:25.266 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.266 00:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.266 ************************************ 00:05:25.266 END TEST locking_overlapped_coremask_via_rpc 00:05:25.266 ************************************ 00:05:25.266 00:41:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:25.266 00:41:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:25.266 00:41:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2527453 ]] 00:05:25.266 00:41:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2527453 00:05:25.266 00:41:59 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2527453 ']' 00:05:25.266 00:41:59 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2527453 00:05:25.266 00:41:59 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:25.266 00:42:00 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.266 00:42:00 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2527453 00:05:25.524 00:42:00 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:25.524 00:42:00 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:25.524 00:42:00 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2527453' 00:05:25.524 killing process with pid 2527453 00:05:25.524 00:42:00 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2527453 00:05:25.524 00:42:00 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2527453 00:05:25.783 00:42:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2527478 ]] 00:05:25.783 00:42:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2527478 00:05:25.783 00:42:00 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2527478 ']' 00:05:25.783 00:42:00 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2527478 00:05:25.783 00:42:00 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:05:25.783 00:42:00 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.783 00:42:00 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2527478 00:05:25.783 00:42:00 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:25.783 00:42:00 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:25.783 00:42:00 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2527478' 00:05:25.783 killing process with pid 2527478 00:05:25.783 00:42:00 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2527478 00:05:25.783 00:42:00 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2527478 00:05:26.364 00:42:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:26.364 00:42:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:26.364 00:42:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2527453 ]] 00:05:26.364 00:42:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2527453 00:05:26.364 00:42:00 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2527453 ']' 00:05:26.364 00:42:00 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2527453 00:05:26.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2527453) - No such process 00:05:26.364 00:42:00 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2527453 is not found' 00:05:26.364 Process with pid 2527453 is not found 00:05:26.364 00:42:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2527478 ]] 00:05:26.364 00:42:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2527478 00:05:26.364 00:42:00 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2527478 ']' 00:05:26.364 00:42:00 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2527478 00:05:26.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2527478) - No such process 00:05:26.364 00:42:00 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2527478 is not found' 00:05:26.364 Process with pid 2527478 is not found 00:05:26.364 00:42:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:26.364 00:05:26.364 real 0m18.375s 00:05:26.364 user 0m32.720s 00:05:26.364 sys 0m5.642s 00:05:26.364 00:42:00 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.364 00:42:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.364 ************************************ 00:05:26.364 END TEST cpu_locks 00:05:26.364 ************************************ 00:05:26.364 00:42:00 event -- common/autotest_common.sh@1142 -- # return 0 00:05:26.364 00:05:26.364 real 0m42.471s 00:05:26.364 user 1m20.599s 00:05:26.364 sys 0m9.725s 00:05:26.364 00:42:00 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.364 00:42:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.364 ************************************ 00:05:26.364 END TEST event 00:05:26.364 ************************************ 00:05:26.364 00:42:00 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.364 00:42:00 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:26.364 00:42:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.364 00:42:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.364 
00:42:00 -- common/autotest_common.sh@10 -- # set +x 00:05:26.364 ************************************ 00:05:26.364 START TEST thread 00:05:26.364 ************************************ 00:05:26.364 00:42:00 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:26.364 * Looking for test storage... 00:05:26.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:26.364 00:42:01 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:26.364 00:42:01 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:26.364 00:42:01 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.364 00:42:01 thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.364 ************************************ 00:05:26.364 START TEST thread_poller_perf 00:05:26.364 ************************************ 00:05:26.364 00:42:01 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:26.364 [2024-07-16 00:42:01.067046] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:26.364 [2024-07-16 00:42:01.067106] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528033 ] 00:05:26.364 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.622 [2024-07-16 00:42:01.126027] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.622 [2024-07-16 00:42:01.234536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.622 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:27.992 ====================================== 00:05:27.992 busy:2715002368 (cyc) 00:05:27.992 total_run_count: 299000 00:05:27.992 tsc_hz: 2700000000 (cyc) 00:05:27.992 ====================================== 00:05:27.992 poller_cost: 9080 (cyc), 3362 (nsec) 00:05:27.993 00:05:27.993 real 0m1.313s 00:05:27.993 user 0m1.235s 00:05:27.993 sys 0m0.071s 00:05:27.993 00:42:02 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.993 00:42:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.993 ************************************ 00:05:27.993 END TEST thread_poller_perf 00:05:27.993 ************************************ 00:05:27.993 00:42:02 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:27.993 00:42:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:27.993 00:42:02 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:27.993 00:42:02 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.993 00:42:02 thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.993 ************************************ 00:05:27.993 START TEST thread_poller_perf 00:05:27.993 ************************************ 00:05:27.993 00:42:02 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:27.993 [2024-07-16 00:42:02.427410] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:27.993 [2024-07-16 00:42:02.427476] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528235 ] 00:05:27.993 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.993 [2024-07-16 00:42:02.490216] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.993 [2024-07-16 00:42:02.608875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.993 Running 1000 pollers for 1 seconds with 0 microseconds period. 
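The poller_cost figure in the summary above is simply busy cycles divided by the iteration count, converted to nanoseconds with the reported TSC frequency; re-deriving it for the first run:

    echo $(( 2715002368 / 299000 ))     # 9080 cycles per poller invocation
    echo "scale=1; 9080 / 2.7" | bc     # 3362.9 ns at tsc_hz = 2.7 GHz, i.e. the 3362 nsec reported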
00:05:29.363 ====================================== 00:05:29.363 busy:2702975842 (cyc) 00:05:29.363 total_run_count: 3855000 00:05:29.363 tsc_hz: 2700000000 (cyc) 00:05:29.363 ====================================== 00:05:29.363 poller_cost: 701 (cyc), 259 (nsec) 00:05:29.363 00:05:29.363 real 0m1.321s 00:05:29.363 user 0m1.232s 00:05:29.363 sys 0m0.082s 00:05:29.363 00:42:03 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.363 00:42:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.363 ************************************ 00:05:29.363 END TEST thread_poller_perf 00:05:29.363 ************************************ 00:05:29.363 00:42:03 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:29.363 00:42:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:29.363 00:05:29.363 real 0m2.769s 00:05:29.363 user 0m2.515s 00:05:29.363 sys 0m0.249s 00:05:29.363 00:42:03 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.363 00:42:03 thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.363 ************************************ 00:05:29.363 END TEST thread 00:05:29.363 ************************************ 00:05:29.363 00:42:03 -- common/autotest_common.sh@1142 -- # return 0 00:05:29.363 00:42:03 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:29.363 00:42:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.363 00:42:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.363 00:42:03 -- common/autotest_common.sh@10 -- # set +x 00:05:29.363 ************************************ 00:05:29.363 START TEST accel 00:05:29.363 ************************************ 00:05:29.363 00:42:03 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:29.363 * Looking for test storage... 00:05:29.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:29.363 00:42:03 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:29.363 00:42:03 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:29.364 00:42:03 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:29.364 00:42:03 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2528426 00:05:29.364 00:42:03 accel -- accel/accel.sh@63 -- # waitforlisten 2528426 00:05:29.364 00:42:03 accel -- common/autotest_common.sh@829 -- # '[' -z 2528426 ']' 00:05:29.364 00:42:03 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:29.364 00:42:03 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.364 00:42:03 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:29.364 00:42:03 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.364 00:42:03 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.364 00:42:03 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:29.364 00:42:03 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.364 00:42:03 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.364 00:42:03 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.364 00:42:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.364 00:42:03 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.364 00:42:03 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.364 00:42:03 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:29.364 00:42:03 accel -- accel/accel.sh@41 -- # jq -r . 00:05:29.364 [2024-07-16 00:42:03.908100] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:29.364 [2024-07-16 00:42:03.908193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528426 ] 00:05:29.364 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.364 [2024-07-16 00:42:03.968574] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.364 [2024-07-16 00:42:04.091356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.621 00:42:04 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.621 00:42:04 accel -- common/autotest_common.sh@862 -- # return 0 00:05:29.621 00:42:04 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:29.621 00:42:04 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:29.621 00:42:04 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:29.621 00:42:04 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:29.621 00:42:04 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:29.621 00:42:04 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:29.621 00:42:04 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.621 00:42:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.621 00:42:04 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:29.621 00:42:04 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.878 00:42:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:29.878 00:42:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:29.878 00:42:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:29.878 00:42:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:29.878 00:42:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:29.878 00:42:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:29.878 00:42:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:29.878 00:42:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:29.878 00:42:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:29.878 00:42:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:29.878 00:42:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:29.878 00:42:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:29.878 00:42:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:29.878 00:42:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:29.878 00:42:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:29.878 00:42:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:29.878 00:42:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:29.878 00:42:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:29.878 00:42:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:29.878 00:42:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:29.878 00:42:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:29.878 00:42:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:29.879 00:42:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:29.879 00:42:04 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:29.879 00:42:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:29.879 00:42:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:29.879 00:42:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:29.879 00:42:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:29.879 00:42:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:29.879 00:42:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:29.879 00:42:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:29.879 00:42:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:29.879 00:42:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:29.879 00:42:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:29.879 00:42:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:29.879 00:42:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:29.879 00:42:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:29.879 00:42:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:29.879 00:42:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:29.879 00:42:04 accel -- accel/accel.sh@75 -- # killprocess 2528426 00:05:29.879 00:42:04 accel -- common/autotest_common.sh@948 -- # '[' -z 2528426 ']' 00:05:29.879 00:42:04 accel -- common/autotest_common.sh@952 -- # kill -0 2528426 00:05:29.879 00:42:04 accel -- common/autotest_common.sh@953 -- # uname 00:05:29.879 00:42:04 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:29.879 00:42:04 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2528426 00:05:29.879 00:42:04 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:29.879 00:42:04 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:29.879 00:42:04 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2528426' 00:05:29.879 killing process with pid 2528426 00:05:29.879 00:42:04 accel -- common/autotest_common.sh@967 -- # kill 2528426 00:05:29.879 00:42:04 accel -- common/autotest_common.sh@972 -- # wait 2528426 00:05:30.136 00:42:04 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:30.136 00:42:04 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:30.136 00:42:04 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:30.136 00:42:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.136 00:42:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.394 00:42:04 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:30.394 00:42:04 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:30.394 00:42:04 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:30.394 00:42:04 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.394 00:42:04 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.394 00:42:04 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.394 00:42:04 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.394 00:42:04 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.394 00:42:04 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:30.394 00:42:04 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:30.394 00:42:04 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.394 00:42:04 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:30.394 00:42:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:30.394 00:42:04 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:30.394 00:42:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:30.394 00:42:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.394 00:42:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.394 ************************************ 00:05:30.394 START TEST accel_missing_filename 00:05:30.394 ************************************ 00:05:30.394 00:42:04 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:30.394 00:42:04 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:30.394 00:42:04 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:30.394 00:42:04 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:30.394 00:42:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.394 00:42:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:30.394 00:42:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.394 00:42:04 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:30.394 00:42:04 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:30.394 00:42:04 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:30.394 00:42:04 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.394 00:42:04 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.394 00:42:04 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.394 00:42:04 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.394 00:42:04 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.394 00:42:04 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:30.394 00:42:04 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:30.394 [2024-07-16 00:42:04.994120] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:30.394 [2024-07-16 00:42:04.994192] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528596 ] 00:05:30.394 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.394 [2024-07-16 00:42:05.053578] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.652 [2024-07-16 00:42:05.163053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.652 [2024-07-16 00:42:05.216841] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:30.652 [2024-07-16 00:42:05.293263] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:30.652 A filename is required. 
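The accel_missing_filename test above wraps accel_perf in the NOT helper, so the test passes only because the command fails: compress needs an input file passed with -l, and without one accel_perf aborts with the "A filename is required." error seen in the log. Reproducing it directly with the same binary and flags:

    # expected to fail: -w compress without -l <input file>
    ./build/examples/accel_perf -t 1 -w compress || echo "failed as expected"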
00:05:30.652 00:42:05 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:30.652 00:42:05 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:30.652 00:42:05 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:30.652 00:42:05 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:30.652 00:42:05 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:30.652 00:42:05 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:30.652 00:05:30.652 real 0m0.430s 00:05:30.652 user 0m0.321s 00:05:30.652 sys 0m0.142s 00:05:30.652 00:42:05 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.652 00:42:05 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:30.652 ************************************ 00:05:30.652 END TEST accel_missing_filename 00:05:30.652 ************************************ 00:05:30.910 00:42:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:30.910 00:42:05 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:30.910 00:42:05 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:30.910 00:42:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.910 00:42:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.910 ************************************ 00:05:30.910 START TEST accel_compress_verify 00:05:30.910 ************************************ 00:05:30.910 00:42:05 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:30.910 00:42:05 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:30.910 00:42:05 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:30.910 00:42:05 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:30.910 00:42:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.910 00:42:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:30.910 00:42:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.910 00:42:05 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:30.910 00:42:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:30.910 00:42:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:30.910 00:42:05 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.910 00:42:05 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.910 00:42:05 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.910 00:42:05 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.910 00:42:05 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.910 00:42:05 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:30.910 00:42:05 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:30.910 [2024-07-16 00:42:05.469266] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:30.910 [2024-07-16 00:42:05.469336] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528741 ] 00:05:30.910 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.910 [2024-07-16 00:42:05.531817] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.910 [2024-07-16 00:42:05.651293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.168 [2024-07-16 00:42:05.712975] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:31.168 [2024-07-16 00:42:05.798229] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:31.168 00:05:31.168 Compression does not support the verify option, aborting. 00:05:31.168 00:42:05 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:31.168 00:42:05 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.168 00:42:05 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:31.168 00:42:05 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:31.168 00:42:05 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:31.168 00:42:05 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.168 00:05:31.168 real 0m0.474s 00:05:31.168 user 0m0.361s 00:05:31.168 sys 0m0.143s 00:05:31.168 00:42:05 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.426 00:42:05 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:31.426 ************************************ 00:05:31.426 END TEST accel_compress_verify 00:05:31.426 ************************************ 00:05:31.426 00:42:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:31.426 00:42:05 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:31.426 00:42:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:31.426 00:42:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.426 00:42:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.426 ************************************ 00:05:31.426 START TEST accel_wrong_workload 00:05:31.426 ************************************ 00:05:31.426 00:42:05 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:31.426 00:42:05 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:31.426 00:42:05 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:31.426 00:42:05 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:31.426 00:42:05 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.426 00:42:05 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:31.426 00:42:05 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.426 00:42:05 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:31.426 00:42:05 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:31.426 00:42:05 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:31.426 00:42:05 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.426 00:42:05 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.426 00:42:05 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.426 00:42:05 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.426 00:42:05 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.426 00:42:05 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:31.426 00:42:05 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:31.426 Unsupported workload type: foobar 00:05:31.426 [2024-07-16 00:42:05.986065] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:31.426 accel_perf options: 00:05:31.426 [-h help message] 00:05:31.426 [-q queue depth per core] 00:05:31.426 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:31.426 [-T number of threads per core 00:05:31.426 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:31.426 [-t time in seconds] 00:05:31.426 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:31.426 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:31.426 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:31.426 [-l for compress/decompress workloads, name of uncompressed input file 00:05:31.426 [-S for crc32c workload, use this seed value (default 0) 00:05:31.426 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:31.426 [-f for fill workload, use this BYTE value (default 255) 00:05:31.426 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:31.426 [-y verify result if this switch is on] 00:05:31.426 [-a tasks to allocate per core (default: same value as -q)] 00:05:31.426 Can be used to spread operations across a wider range of memory. 
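The usage dump above comes from accel_perf rejecting -w foobar; the accel_negative_buffers case that follows triggers the same dump with -x -1. For contrast, a valid invocation in the style of the crc32c run later in this section (same binary path, flags as listed in the help text):

    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # 1 second of crc32c with seed 32, verifying results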
00:05:31.426 00:42:05 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:31.426 00:42:05 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.427 00:42:05 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:31.427 00:42:05 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.427 00:05:31.427 real 0m0.022s 00:05:31.427 user 0m0.013s 00:05:31.427 sys 0m0.009s 00:05:31.427 00:42:05 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.427 00:42:05 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:31.427 ************************************ 00:05:31.427 END TEST accel_wrong_workload 00:05:31.427 ************************************ 00:05:31.427 Error: writing output failed: Broken pipe 00:05:31.427 00:42:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:31.427 00:42:06 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:31.427 00:42:06 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:31.427 00:42:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.427 00:42:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.427 ************************************ 00:05:31.427 START TEST accel_negative_buffers 00:05:31.427 ************************************ 00:05:31.427 00:42:06 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:31.427 00:42:06 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:31.427 00:42:06 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:31.427 00:42:06 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:31.427 00:42:06 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.427 00:42:06 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:31.427 00:42:06 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.427 00:42:06 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:31.427 00:42:06 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:31.427 00:42:06 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:31.427 00:42:06 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.427 00:42:06 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.427 00:42:06 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.427 00:42:06 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.427 00:42:06 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.427 00:42:06 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:31.427 00:42:06 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:31.427 -x option must be non-negative. 
00:05:31.427 [2024-07-16 00:42:06.059100] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:31.427 accel_perf options: 00:05:31.427 [-h help message] 00:05:31.427 [-q queue depth per core] 00:05:31.427 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:31.427 [-T number of threads per core 00:05:31.427 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:31.427 [-t time in seconds] 00:05:31.427 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:31.427 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:31.427 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:31.427 [-l for compress/decompress workloads, name of uncompressed input file 00:05:31.427 [-S for crc32c workload, use this seed value (default 0) 00:05:31.427 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:31.427 [-f for fill workload, use this BYTE value (default 255) 00:05:31.427 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:31.427 [-y verify result if this switch is on] 00:05:31.427 [-a tasks to allocate per core (default: same value as -q)] 00:05:31.427 Can be used to spread operations across a wider range of memory. 00:05:31.427 00:42:06 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:31.427 00:42:06 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.427 00:42:06 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:31.427 00:42:06 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.427 00:05:31.427 real 0m0.024s 00:05:31.427 user 0m0.014s 00:05:31.427 sys 0m0.010s 00:05:31.427 00:42:06 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.427 00:42:06 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:31.427 ************************************ 00:05:31.427 END TEST accel_negative_buffers 00:05:31.427 ************************************ 00:05:31.427 Error: writing output failed: Broken pipe 00:05:31.427 00:42:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:31.427 00:42:06 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:31.427 00:42:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:31.427 00:42:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.427 00:42:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.427 ************************************ 00:05:31.427 START TEST accel_crc32c 00:05:31.427 ************************************ 00:05:31.427 00:42:06 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:31.427 00:42:06 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:31.427 00:42:06 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:31.427 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.427 00:42:06 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:31.427 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.427 00:42:06 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:31.427 00:42:06 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:31.427 00:42:06 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.427 00:42:06 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.427 00:42:06 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.427 00:42:06 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.427 00:42:06 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.427 00:42:06 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:31.427 00:42:06 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:31.427 [2024-07-16 00:42:06.120781] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:31.427 [2024-07-16 00:42:06.120834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528813 ] 00:05:31.427 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.685 [2024-07-16 00:42:06.184265] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.685 [2024-07-16 00:42:06.303413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.685 00:42:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:33.058 00:42:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.058 00:05:33.058 real 0m1.466s 00:05:33.058 user 0m1.326s 00:05:33.058 sys 0m0.142s 00:05:33.058 00:42:07 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.058 00:42:07 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:33.058 ************************************ 00:05:33.058 END TEST accel_crc32c 00:05:33.058 ************************************ 00:05:33.058 00:42:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.058 00:42:07 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:33.058 00:42:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:33.058 00:42:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.058 00:42:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.058 ************************************ 00:05:33.058 START TEST accel_crc32c_C2 00:05:33.058 ************************************ 00:05:33.058 00:42:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:33.058 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:33.058 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:33.058 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.058 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:33.058 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.058 00:42:07 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:33.058 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.058 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.058 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.058 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.058 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.058 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.058 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:33.058 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:33.058 [2024-07-16 00:42:07.637252] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:33.058 [2024-07-16 00:42:07.637320] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2529236 ] 00:05:33.058 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.058 [2024-07-16 00:42:07.700995] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.317 [2024-07-16 00:42:07.821370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.317 00:42:07 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:05:33.317 00:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.692 00:05:34.692 real 0m1.466s 00:05:34.692 user 0m1.338s 00:05:34.692 sys 0m0.129s 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.692 00:42:09 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:34.692 ************************************ 00:05:34.692 END TEST accel_crc32c_C2 00:05:34.692 ************************************ 00:05:34.692 00:42:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.692 00:42:09 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:34.692 00:42:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:34.692 00:42:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.692 00:42:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.692 ************************************ 00:05:34.692 START TEST accel_copy 00:05:34.692 ************************************ 00:05:34.692 00:42:09 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:34.692 00:42:09 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:34.692 00:42:09 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:05:34.692 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.692 00:42:09 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:34.692 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.692 00:42:09 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:34.692 00:42:09 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:34.692 00:42:09 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.692 00:42:09 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.692 00:42:09 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.692 00:42:09 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.692 00:42:09 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.692 00:42:09 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:34.692 00:42:09 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:34.692 [2024-07-16 00:42:09.147752] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:34.692 [2024-07-16 00:42:09.147815] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2529721 ] 00:05:34.692 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.692 [2024-07-16 00:42:09.210466] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.693 [2024-07-16 00:42:09.328627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.693 00:42:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.066 
00:42:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.066 00:42:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:36.067 00:42:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.067 00:05:36.067 real 0m1.457s 00:05:36.067 user 0m1.322s 00:05:36.067 sys 0m0.136s 00:05:36.067 00:42:10 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.067 00:42:10 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:36.067 ************************************ 00:05:36.067 END TEST accel_copy 00:05:36.067 ************************************ 00:05:36.067 00:42:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:36.067 00:42:10 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:36.067 00:42:10 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:36.067 00:42:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.067 00:42:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.067 ************************************ 00:05:36.067 START TEST accel_fill 00:05:36.067 ************************************ 00:05:36.067 00:42:10 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:36.067 00:42:10 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:36.067 00:42:10 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:36.067 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.067 00:42:10 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:36.067 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.067 00:42:10 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:36.067 00:42:10 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:05:36.067 00:42:10 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.067 00:42:10 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.067 00:42:10 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.067 00:42:10 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.067 00:42:10 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.067 00:42:10 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:36.067 00:42:10 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:36.067 [2024-07-16 00:42:10.651945] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:36.067 [2024-07-16 00:42:10.652008] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2529907 ] 00:05:36.067 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.067 [2024-07-16 00:42:10.714269] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.325 [2024-07-16 00:42:10.833580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
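The accel_fill case started above is driven with -f 128, and the traced configuration shows the matching val=0x80 read; the two are the same fill pattern byte written in decimal and in hex (the -q 64 -a 64 arguments likewise surface as the val=64 reads that follow). A one-line check of that conversion:

    # 128 on the accel_fill command line equals the 0x80 pattern byte in the trace.
    printf '0x%02x\n' 128   # prints 0x80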
00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.325 00:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:37.699 00:42:12 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.699 00:05:37.699 real 0m1.479s 00:05:37.699 user 0m1.338s 00:05:37.699 sys 0m0.143s 00:05:37.699 00:42:12 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.699 00:42:12 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:37.699 ************************************ 00:05:37.699 END TEST accel_fill 00:05:37.699 ************************************ 00:05:37.699 00:42:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:37.699 00:42:12 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:37.699 00:42:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:37.699 00:42:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.699 00:42:12 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.699 ************************************ 00:05:37.699 START TEST accel_copy_crc32c 00:05:37.699 ************************************ 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:37.699 [2024-07-16 00:42:12.175423] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:37.699 [2024-07-16 00:42:12.175484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2530179 ] 00:05:37.699 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.699 [2024-07-16 00:42:12.232309] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.699 [2024-07-16 00:42:12.335460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.699 
00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.699 00:42:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.168 00:05:39.168 real 0m1.453s 00:05:39.168 user 0m1.322s 00:05:39.168 sys 0m0.132s 00:05:39.168 00:42:13 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.169 00:42:13 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:39.169 ************************************ 00:05:39.169 END TEST accel_copy_crc32c 00:05:39.169 ************************************ 00:05:39.169 00:42:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:39.169 00:42:13 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:39.169 00:42:13 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:39.169 00:42:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.169 00:42:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.169 ************************************ 00:05:39.169 START TEST accel_copy_crc32c_C2 00:05:39.169 ************************************ 00:05:39.169 00:42:13 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:39.169 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:39.169 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:39.169 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.169 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:39.169 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.169 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:39.169 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.169 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.169 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.169 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.169 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.169 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.169 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:39.169 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:39.169 [2024-07-16 00:42:13.681466] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:39.169 [2024-07-16 00:42:13.681531] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2530339 ] 00:05:39.169 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.169 [2024-07-16 00:42:13.745326] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.169 [2024-07-16 00:42:13.868196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
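Every accel_perf start in this log prints the same notice, EAL: No free 2048 kB hugepages reported on node 1, before the reactor comes up on core 0. On a typical Linux host the per-node counters behind that notice live in sysfs and can be inspected directly; this is an illustrative command, not something executed in this run:

    # Per-NUMA-node free 2 MiB hugepage counters (standard sysfs layout):
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages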
00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.426 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.427 00:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
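Each case in this log ends the same way: the three checks traced at accel.sh@27 (module parsed, opcode parsed, module equal to software), bash's time summary (the real/user/sys lines, roughly the 1-second measurement from -t 1 plus application start-up and teardown), and the xtrace_disable epilogue. The checks, restated with the variable names visible in the trace:

    # End-of-test assertions as traced at accel.sh@27 for every case in this log:
    [[ -n "$accel_module" ]]              # a module was parsed from accel_perf output
    [[ -n "$accel_opc" ]]                 # an opcode was parsed
    [[ "$accel_module" == "software" ]]   # the software engine is expected here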
00:05:40.800 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.801 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:40.801 00:42:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.801 00:05:40.801 real 0m1.492s 00:05:40.801 user 0m1.349s 00:05:40.801 sys 0m0.146s 00:05:40.801 00:42:15 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.801 00:42:15 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:40.801 ************************************ 00:05:40.801 END TEST accel_copy_crc32c_C2 00:05:40.801 ************************************ 00:05:40.801 00:42:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.801 00:42:15 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:40.801 00:42:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:40.801 00:42:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.801 00:42:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.801 ************************************ 00:05:40.801 START TEST accel_dualcast 00:05:40.801 ************************************ 00:05:40.801 00:42:15 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:40.801 [2024-07-16 00:42:15.219036] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:05:40.801 [2024-07-16 00:42:15.219101] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2530503 ] 00:05:40.801 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.801 [2024-07-16 00:42:15.280391] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.801 [2024-07-16 00:42:15.405529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 00:42:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.175 00:42:16 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:42.175 00:42:16 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.175 00:05:42.175 real 0m1.483s 00:05:42.175 user 0m1.339s 00:05:42.175 sys 0m0.146s 00:05:42.175 00:42:16 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.175 00:42:16 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:42.175 ************************************ 00:05:42.175 END TEST accel_dualcast 00:05:42.175 ************************************ 00:05:42.175 00:42:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.175 00:42:16 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:42.175 00:42:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:42.175 00:42:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.175 00:42:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.176 ************************************ 00:05:42.176 START TEST accel_compare 00:05:42.176 ************************************ 00:05:42.176 00:42:16 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:42.176 00:42:16 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:42.176 00:42:16 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:42.176 00:42:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.176 00:42:16 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:42.176 00:42:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.176 00:42:16 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:42.176 00:42:16 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:42.176 00:42:16 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.176 00:42:16 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.176 00:42:16 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.176 00:42:16 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.176 00:42:16 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.176 00:42:16 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:42.176 00:42:16 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:42.176 [2024-07-16 00:42:16.752362] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
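Next, the `compare` workload started above checks two equal-length buffers for byte-for-byte equality; conceptually it reduces to a memcmp. A hedged sketch, again only an illustration of the semantics:

```c
/* Rough model of the compare opcode: test two buffers for equality.
 * Illustration only, not SPDK's implementation. */
#include <stdio.h>
#include <string.h>

static int buffers_equal(const void *a, const void *b, size_t len)
{
    return memcmp(a, b, len) == 0;
}

int main(void)
{
    unsigned char x[4096], y[4096];
    memset(x, 0x11, sizeof(x));
    memcpy(y, x, sizeof(y));
    printf("equal: %d\n", buffers_equal(x, y, sizeof(x)));                    /* 1 */
    y[100] ^= 0xFF;                                                           /* corrupt one byte */
    printf("equal after corruption: %d\n", buffers_equal(x, y, sizeof(x)));   /* 0 */
    return 0;
}
```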
00:05:42.176 [2024-07-16 00:42:16.752428] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2530771 ] 00:05:42.176 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.176 [2024-07-16 00:42:16.815467] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.435 [2024-07-16 00:42:16.938523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.435 00:42:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.435 00:42:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.435 00:42:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.435 00:42:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.435 00:42:17 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.435 00:42:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.809 00:42:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:43.809 00:42:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.809 00:42:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.809 00:42:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.809 00:42:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 
00:42:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:43.810 00:42:18 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.810 00:05:43.810 real 0m1.489s 00:05:43.810 user 0m1.340s 00:05:43.810 sys 0m0.150s 00:05:43.810 00:42:18 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.810 00:42:18 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:43.810 ************************************ 00:05:43.810 END TEST accel_compare 00:05:43.810 ************************************ 00:05:43.810 00:42:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:43.810 00:42:18 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:43.810 00:42:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:43.810 00:42:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.810 00:42:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.810 ************************************ 00:05:43.810 START TEST accel_xor 00:05:43.810 ************************************ 00:05:43.810 00:42:18 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:43.810 [2024-07-16 00:42:18.284780] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
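The `xor` workload started above XORs several equal-length source buffers into a destination; this run uses two sources (`val=2` in the trace that follows), and a later run passes `-x 3` for three. A hedged sketch that handles any source count, not the measured code path:

```c
/* Rough model of the xor opcode: XOR n equal-length source buffers into
 * dst. Illustration only, not SPDK's implementation. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static void xor_bufs(uint8_t *dst, const uint8_t **srcs, size_t nsrc, size_t len)
{
    memset(dst, 0, len);
    for (size_t s = 0; s < nsrc; s++)
        for (size_t i = 0; i < len; i++)
            dst[i] ^= srcs[s][i];
}

int main(void)
{
    uint8_t a[4096], b[4096], dst[4096];
    memset(a, 0xF0, sizeof(a));
    memset(b, 0x0F, sizeof(b));
    const uint8_t *srcs[] = { a, b };
    xor_bufs(dst, srcs, 2, sizeof(dst));
    printf("dst[0] = 0x%02x\n", dst[0]);   /* 0xF0 ^ 0x0F = 0xFF */
    return 0;
}
```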
00:05:43.810 [2024-07-16 00:42:18.284847] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2530934 ] 00:05:43.810 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.810 [2024-07-16 00:42:18.346769] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.810 [2024-07-16 00:42:18.470264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.810 00:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.183 00:05:45.183 real 0m1.487s 00:05:45.183 user 0m1.337s 00:05:45.183 sys 0m0.152s 00:05:45.183 00:42:19 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.183 00:42:19 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:45.183 ************************************ 00:05:45.183 END TEST accel_xor 00:05:45.183 ************************************ 00:05:45.183 00:42:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.183 00:42:19 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:45.183 00:42:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:45.183 00:42:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.183 00:42:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.183 ************************************ 00:05:45.183 START TEST accel_xor 00:05:45.183 ************************************ 00:05:45.183 00:42:19 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:45.183 00:42:19 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:45.183 [2024-07-16 00:42:19.819598] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
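The second `accel_xor` test starting above repeats the workload with `-x 3`, i.e. three source buffers XORed together (`val=3` in the trace that follows); with the sketch shown earlier this is simply a call such as `xor_bufs(dst, srcs, 3, len)` with a three-element source array, again only as an illustration of the semantics rather than the measured code path.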
00:05:45.183 [2024-07-16 00:42:19.819663] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2531091 ] 00:05:45.183 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.183 [2024-07-16 00:42:19.883980] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.442 [2024-07-16 00:42:20.007104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.442 00:42:20 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.442 00:42:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:46.817 00:42:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.817 00:05:46.817 real 0m1.485s 00:05:46.817 user 0m1.343s 00:05:46.817 sys 0m0.144s 00:05:46.817 00:42:21 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.817 00:42:21 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:46.817 ************************************ 00:05:46.817 END TEST accel_xor 00:05:46.817 ************************************ 00:05:46.817 00:42:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:46.817 00:42:21 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:46.817 00:42:21 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:46.817 00:42:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.817 00:42:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.817 ************************************ 00:05:46.817 START TEST accel_dif_verify 00:05:46.817 ************************************ 00:05:46.817 00:42:21 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:46.817 00:42:21 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:46.817 00:42:21 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:46.817 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.817 00:42:21 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:46.817 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.817 00:42:21 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:46.817 00:42:21 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:46.817 00:42:21 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.817 00:42:21 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.817 00:42:21 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.817 00:42:21 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.817 00:42:21 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.817 00:42:21 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:46.817 00:42:21 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:46.817 [2024-07-16 00:42:21.351171] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
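The `dif_verify` workload starting above, together with the `dif_generate` run later in this log, exercises T10 protection information: each data block carries an 8-byte tuple of guard CRC, application tag and reference tag, which is consistent with the 4096-, 512- and 8-byte values visible in the trace (a 4 KiB transfer, 512-byte blocks, 8 bytes of metadata per block). A hedged software model follows, assuming the T10-DIF guard CRC (16 bits, polynomial 0x8BB7) and a simplified, host-endian field layout; none of this is SPDK's implementation:

```c
/* Rough model of dif_generate / dif_verify: produce and check an 8-byte
 * protection tuple per 512-byte block. Illustration only; the field
 * layout is simplified (the on-wire format is big-endian) and this is
 * not SPDK's implementation. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* CRC-16/T10-DIF: polynomial 0x8BB7, init 0, no reflection, no final XOR. */
static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)(buf[i] << 8);
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7) : (uint16_t)(crc << 1);
    }
    return crc;
}

struct dif { uint16_t guard; uint16_t app_tag; uint32_t ref_tag; };

static void dif_generate(const uint8_t *data, size_t blocks, struct dif *pi, uint32_t start_ref)
{
    for (size_t i = 0; i < blocks; i++) {
        pi[i].guard   = crc16_t10dif(data + i * 512, 512);
        pi[i].app_tag = 0;
        pi[i].ref_tag = start_ref + (uint32_t)i;   /* typically the low 32 bits of the LBA */
    }
}

static int dif_verify(const uint8_t *data, size_t blocks, const struct dif *pi, uint32_t start_ref)
{
    for (size_t i = 0; i < blocks; i++) {
        if (pi[i].guard != crc16_t10dif(data + i * 512, 512)) return -1;
        if (pi[i].ref_tag != start_ref + (uint32_t)i)         return -1;
    }
    return 0;
}

int main(void)
{
    uint8_t data[4096];          /* 8 blocks of 512 bytes */
    struct dif pi[8];
    memset(data, 0xab, sizeof(data));
    dif_generate(data, 8, pi, 100);
    printf("verify clean data: %d\n", dif_verify(data, 8, pi, 100));      /* 0  */
    data[0] ^= 1;
    printf("verify corrupted data: %d\n", dif_verify(data, 8, pi, 100));  /* -1 */
    return 0;
}
```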
00:05:46.817 [2024-07-16 00:42:21.351235] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2531363 ] 00:05:46.817 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.817 [2024-07-16 00:42:21.416895] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.817 [2024-07-16 00:42:21.539695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:47.077 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.078 00:42:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:48.452 00:42:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.452 00:05:48.452 real 0m1.489s 00:05:48.452 user 0m1.337s 00:05:48.452 sys 0m0.155s 00:05:48.452 00:42:22 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.452 00:42:22 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:48.453 ************************************ 00:05:48.453 END TEST accel_dif_verify 00:05:48.453 ************************************ 00:05:48.453 00:42:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.453 00:42:22 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:48.453 00:42:22 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:48.453 00:42:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.453 00:42:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.453 ************************************ 00:05:48.453 START TEST accel_dif_generate 00:05:48.453 ************************************ 00:05:48.453 00:42:22 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:48.453 00:42:22 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:48.453 00:42:22 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:48.453 00:42:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 
00:42:22 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:48.453 00:42:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.453 00:42:22 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:48.453 00:42:22 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:48.453 00:42:22 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.453 00:42:22 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.453 00:42:22 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.453 00:42:22 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.453 00:42:22 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.453 00:42:22 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:48.453 00:42:22 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:48.453 [2024-07-16 00:42:22.886901] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:48.453 [2024-07-16 00:42:22.886966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2531523 ] 00:05:48.453 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.453 [2024-07-16 00:42:22.952782] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.453 [2024-07-16 00:42:23.075819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:48.453 00:42:23 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.453 00:42:23 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.453 00:42:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.825 00:42:24 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:49.825 00:42:24 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.825 00:05:49.825 real 0m1.493s 00:05:49.825 user 0m1.350s 00:05:49.825 sys 0m0.146s 00:05:49.825 00:42:24 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.825 00:42:24 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:49.825 ************************************ 00:05:49.825 END TEST accel_dif_generate 00:05:49.825 ************************************ 00:05:49.825 00:42:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.825 00:42:24 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:49.825 00:42:24 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:49.825 00:42:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.825 00:42:24 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.825 ************************************ 00:05:49.825 START TEST accel_dif_generate_copy 00:05:49.825 ************************************ 00:05:49.825 00:42:24 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:49.825 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:49.825 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:49.825 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.825 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:49.825 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.825 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:49.825 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:49.825 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.825 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.825 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.825 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.825 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.825 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:49.825 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:49.826 [2024-07-16 00:42:24.424546] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
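For reference, the two DIF cases above drive SPDK's accel_perf example directly; a minimal sketch of the invocation, reconstructed from the trace (the workspace path is the one used throughout this run, and the JSON accel config reaches the tool on /dev/fd/62 from the harness's build_accel_config step, empty here, which is why the summary reports the software module):

# run the DIF-generate workload for 1 second; the copy variant only swaps
# the workload name to dif_generate_copy
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate

Outside the harness /dev/fd/62 would not exist, so the JSON config would need to be written to an ordinary file first.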
00:05:49.826 [2024-07-16 00:42:24.424607] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2531684 ] 00:05:49.826 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.826 [2024-07-16 00:42:24.491348] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.084 [2024-07-16 00:42:24.612980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.084 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.085 00:42:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.458 00:05:51.458 real 0m1.483s 00:05:51.458 user 0m1.342s 00:05:51.458 sys 0m0.142s 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.458 00:42:25 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:51.458 ************************************ 00:05:51.458 END TEST accel_dif_generate_copy 00:05:51.458 ************************************ 00:05:51.458 00:42:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.458 00:42:25 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:51.458 00:42:25 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:51.458 00:42:25 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:51.458 00:42:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.458 00:42:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.458 ************************************ 00:05:51.458 START TEST accel_comp 00:05:51.458 ************************************ 00:05:51.458 00:42:25 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:51.458 00:42:25 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:05:51.458 00:42:25 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:51.458 00:42:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:25 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:51.458 00:42:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:25 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:51.458 00:42:25 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:51.458 00:42:25 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.458 00:42:25 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.458 00:42:25 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.458 00:42:25 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.458 00:42:25 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.458 00:42:25 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:51.458 00:42:25 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:51.458 [2024-07-16 00:42:25.952732] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:51.458 [2024-07-16 00:42:25.952797] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2531953 ] 00:05:51.458 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.458 [2024-07-16 00:42:26.016665] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.458 [2024-07-16 00:42:26.139641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:26 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.458 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.459 00:42:26 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:51.459 00:42:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.459 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.459 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:05:51.459 00:42:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:51.459 00:42:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.459 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.459 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.459 00:42:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:51.459 00:42:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.459 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.459 00:42:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.827 00:42:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.827 00:42:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.827 00:42:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.827 00:42:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.827 00:42:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.827 00:42:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.827 00:42:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.827 00:42:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.827 00:42:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.828 00:42:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.828 00:42:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.828 00:42:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.828 00:42:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.828 00:42:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.828 00:42:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.828 00:42:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.828 00:42:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.828 00:42:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.828 00:42:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.828 00:42:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.828 00:42:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.828 00:42:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.828 00:42:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.828 00:42:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.828 00:42:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.828 00:42:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:52.828 00:42:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.828 00:05:52.828 real 0m1.489s 00:05:52.828 user 0m1.349s 00:05:52.828 sys 0m0.143s 00:05:52.828 00:42:27 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.828 00:42:27 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:52.828 ************************************ 00:05:52.828 END TEST accel_comp 00:05:52.828 ************************************ 00:05:52.828 00:42:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.828 00:42:27 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:52.828 00:42:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:52.828 00:42:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.828 00:42:27 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:52.828 ************************************ 00:05:52.828 START TEST accel_decomp 00:05:52.828 ************************************ 00:05:52.828 00:42:27 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:52.828 00:42:27 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:52.828 00:42:27 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:52.828 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:52.828 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:52.828 00:42:27 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:52.828 00:42:27 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:52.828 00:42:27 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:52.828 00:42:27 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.828 00:42:27 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.828 00:42:27 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.828 00:42:27 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.828 00:42:27 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.828 00:42:27 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:52.828 00:42:27 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:52.828 [2024-07-16 00:42:27.486752] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
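The compress and decompress cases above feed accel_perf the 'bib' corpus that ships in the SPDK tree; a hedged reconstruction of the two traced commands (the -y switch on the decompress run is assumed to turn on result verification; everything else is copied from the trace):

# compress the test corpus for 1 second, then decompress it and check the output
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l test/accel/bib
./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l test/accel/bib -y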
00:05:52.828 [2024-07-16 00:42:27.486815] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2532119 ] 00:05:52.828 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.828 [2024-07-16 00:42:27.554447] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.085 [2024-07-16 00:42:27.678161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.085 00:42:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:53.085 00:42:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.085 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.085 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.085 00:42:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:53.085 00:42:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.085 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.085 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.085 00:42:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:53.085 00:42:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.085 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.085 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.085 00:42:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:53.085 00:42:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.085 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.086 00:42:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.457 00:42:28 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:54.457 00:42:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.457 00:05:54.457 real 0m1.498s 00:05:54.457 user 0m1.344s 00:05:54.457 sys 0m0.157s 00:05:54.457 00:42:28 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.457 00:42:28 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:54.457 ************************************ 00:05:54.457 END TEST accel_decomp 00:05:54.457 ************************************ 00:05:54.457 00:42:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.457 00:42:28 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:54.457 00:42:28 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:54.457 00:42:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.457 00:42:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.457 ************************************ 00:05:54.457 START TEST accel_decomp_full 00:05:54.457 ************************************ 00:05:54.457 00:42:29 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:54.457 00:42:29 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:54.457 00:42:29 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:54.457 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.457 00:42:29 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:54.457 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.457 00:42:29 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:54.457 00:42:29 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:54.457 00:42:29 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.457 00:42:29 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.457 00:42:29 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.457 00:42:29 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.457 00:42:29 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.457 00:42:29 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:54.457 00:42:29 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:54.457 [2024-07-16 00:42:29.032600] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:05:54.457 [2024-07-16 00:42:29.032665] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2532274 ] 00:05:54.457 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.457 [2024-07-16 00:42:29.100133] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.715 [2024-07-16 00:42:29.222761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.715 00:42:29 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.715 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.716 00:42:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:56.119 00:42:30 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.119 00:05:56.119 real 0m1.504s 00:05:56.119 user 0m1.367s 00:05:56.119 sys 0m0.140s 00:05:56.119 00:42:30 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.119 00:42:30 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:56.119 ************************************ 00:05:56.119 END TEST accel_decomp_full 00:05:56.119 ************************************ 00:05:56.119 00:42:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.119 00:42:30 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:56.119 00:42:30 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:05:56.119 00:42:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.119 00:42:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.119 ************************************ 00:05:56.119 START TEST accel_decomp_mcore 00:05:56.119 ************************************ 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:56.119 [2024-07-16 00:42:30.590643] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
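The mcore variant differs from the plain decompress run only in its core mask; a sketch of the traced command (the 0xf mask is what yields the "Total cores available: 4" notice and the four reactor threads below):

# same decompress workload, now spread over cores 0-3 via the 0xf core mask
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l test/accel/bib -y -m 0xf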
00:05:56.119 [2024-07-16 00:42:30.590708] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2532548 ] 00:05:56.119 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.119 [2024-07-16 00:42:30.656209] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:56.119 [2024-07-16 00:42:30.780990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.119 [2024-07-16 00:42:30.781044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.119 [2024-07-16 00:42:30.781097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.119 [2024-07-16 00:42:30.781100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.119 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.120 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.120 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.120 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.120 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.120 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:56.120 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.120 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.120 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.120 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.120 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.120 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.120 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.120 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.120 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.120 00:42:30 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:56.120 00:42:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.491 00:05:57.491 real 0m1.489s 00:05:57.491 user 0m4.773s 00:05:57.491 sys 0m0.152s 00:05:57.491 00:42:32 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.491 00:42:32 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:57.491 ************************************ 00:05:57.491 END TEST accel_decomp_mcore 00:05:57.491 ************************************ 00:05:57.491 00:42:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:57.491 00:42:32 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:57.491 00:42:32 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:57.491 00:42:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.491 00:42:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.491 ************************************ 00:05:57.491 START TEST accel_decomp_full_mcore 00:05:57.491 ************************************ 00:05:57.491 00:42:32 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:57.491 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:57.491 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:57.491 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.491 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:57.491 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.491 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:57.491 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:57.491 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.492 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.492 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.492 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.492 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.492 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:57.492 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:57.492 [2024-07-16 00:42:32.129296] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
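For reference, these decomp cases all drive the accel_perf example binary that accel.sh wraps. A minimal stand-alone reproduction of the accel_decomp_full_mcore case starting here, with flag meanings inferred from the trace (-w workload, -t run time in seconds, -l compressed input file, -y verify the output, -o 0 use the input's full size, -m reactor core mask), would be roughly:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # workspace path as used throughout this log
  $SPDK/build/examples/accel_perf -t 1 -w decompress \
      -l $SPDK/test/accel/bib -y -o 0 -m 0xf

The 0xf mask lines up with the four reactors (cores 0-3) reported just below.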
00:05:57.492 [2024-07-16 00:42:32.129364] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2532714 ] 00:05:57.492 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.492 [2024-07-16 00:42:32.194478] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.750 [2024-07-16 00:42:32.320527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.750 [2024-07-16 00:42:32.320583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.750 [2024-07-16 00:42:32.320636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.750 [2024-07-16 00:42:32.320639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.750 00:42:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.124 00:05:59.124 real 0m1.514s 00:05:59.124 user 0m4.864s 00:05:59.124 sys 0m0.156s 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.124 00:42:33 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:59.124 ************************************ 00:05:59.124 END TEST accel_decomp_full_mcore 00:05:59.124 ************************************ 00:05:59.124 00:42:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.124 00:42:33 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:59.124 00:42:33 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:59.124 00:42:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.125 00:42:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.125 ************************************ 00:05:59.125 START TEST accel_decomp_mthread 00:05:59.125 ************************************ 00:05:59.125 00:42:33 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:59.125 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:59.125 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:59.125 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.125 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:59.125 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.125 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:59.125 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:59.125 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.125 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.125 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.125 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.125 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.125 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:59.125 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:59.125 [2024-07-16 00:42:33.690582] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
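The accel_decomp_mthread case beginning here trades the core mask for a thread count: the EAL line below shows -c 0x1 (a single core, one reactor on core 0), and -T 2 appears to request two worker threads on that core. A hedged stand-alone equivalent, same workspace path as above:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -t 1 -w decompress \
      -l $SPDK/test/accel/bib -y -T 2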
00:05:59.125 [2024-07-16 00:42:33.690646] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2532932 ] 00:05:59.125 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.125 [2024-07-16 00:42:33.754563] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.125 [2024-07-16 00:42:33.877202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.383 00:42:33 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.383 00:42:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.755 00:42:35 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.755 00:06:00.755 real 0m1.501s 00:06:00.755 user 0m1.349s 00:06:00.755 sys 0m0.155s 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.755 00:42:35 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:00.755 ************************************ 00:06:00.755 END TEST accel_decomp_mthread 00:06:00.755 ************************************ 00:06:00.755 00:42:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.755 00:42:35 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:00.755 00:42:35 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:00.755 00:42:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.755 00:42:35 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:00.755 ************************************ 00:06:00.755 START TEST accel_decomp_full_mthread 00:06:00.755 ************************************ 00:06:00.755 00:42:35 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:00.755 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:00.755 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:00.755 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.755 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:00.755 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.755 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:00.755 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:00.755 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.755 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.755 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.755 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.755 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.755 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:00.756 [2024-07-16 00:42:35.237486] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
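The only functional difference between the mthread case above and the accel_decomp_full_mthread case starting here is the operation size: without -o the trace shows '4096 bytes' per operation, while -o 0 submits the whole test file as a single buffer ('111250 bytes' in the val= trace that follows). Side by side, under the same path assumption:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 4096-byte operations (accel_decomp_mthread)
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2
  # whole-file operations (accel_decomp_full_mthread)
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -T 2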
00:06:00.756 [2024-07-16 00:42:35.237553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2533142 ] 00:06:00.756 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.756 [2024-07-16 00:42:35.303979] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.756 [2024-07-16 00:42:35.425434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.756 00:42:35 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.756 00:42:35 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.756 00:42:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.127 00:06:02.127 real 0m1.515s 00:06:02.127 user 0m1.368s 00:06:02.127 sys 0m0.149s 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.127 00:42:36 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:02.127 ************************************ 00:06:02.127 END 
TEST accel_decomp_full_mthread 00:06:02.127 ************************************ 00:06:02.127 00:42:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.127 00:42:36 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:02.127 00:42:36 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:02.127 00:42:36 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:02.127 00:42:36 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:02.127 00:42:36 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.127 00:42:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.127 00:42:36 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.127 00:42:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.127 00:42:36 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.127 00:42:36 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.127 00:42:36 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.127 00:42:36 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:02.127 00:42:36 accel -- accel/accel.sh@41 -- # jq -r . 00:06:02.127 ************************************ 00:06:02.127 START TEST accel_dif_functional_tests 00:06:02.127 ************************************ 00:06:02.127 00:42:36 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:02.127 [2024-07-16 00:42:36.819586] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:06:02.127 [2024-07-16 00:42:36.819647] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2533310 ] 00:06:02.127 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.127 [2024-07-16 00:42:36.879489] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.385 [2024-07-16 00:42:37.004731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.385 [2024-07-16 00:42:37.004785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.385 [2024-07-16 00:42:37.004788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.385 00:06:02.385 00:06:02.385 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.385 http://cunit.sourceforge.net/ 00:06:02.385 00:06:02.385 00:06:02.385 Suite: accel_dif 00:06:02.386 Test: verify: DIF generated, GUARD check ...passed 00:06:02.386 Test: verify: DIF generated, APPTAG check ...passed 00:06:02.386 Test: verify: DIF generated, REFTAG check ...passed 00:06:02.386 Test: verify: DIF not generated, GUARD check ...[2024-07-16 00:42:37.104387] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:02.386 passed 00:06:02.386 Test: verify: DIF not generated, APPTAG check ...[2024-07-16 00:42:37.104461] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:02.386 passed 00:06:02.386 Test: verify: DIF not generated, REFTAG check ...[2024-07-16 00:42:37.104508] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:02.386 passed 00:06:02.386 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:02.386 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-16 
00:42:37.104583] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:02.386 passed 00:06:02.386 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:02.386 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:02.386 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:02.386 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-16 00:42:37.104739] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:02.386 passed 00:06:02.386 Test: verify copy: DIF generated, GUARD check ...passed 00:06:02.386 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:02.386 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:02.386 Test: verify copy: DIF not generated, GUARD check ...[2024-07-16 00:42:37.104928] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:02.386 passed 00:06:02.386 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-16 00:42:37.104974] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:02.386 passed 00:06:02.386 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-16 00:42:37.105012] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:02.386 passed 00:06:02.386 Test: generate copy: DIF generated, GUARD check ...passed 00:06:02.386 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:02.386 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:02.386 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:02.386 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:02.386 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:02.386 Test: generate copy: iovecs-len validate ...[2024-07-16 00:42:37.105273] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:02.386 passed 00:06:02.386 Test: generate copy: buffer alignment validate ...passed 00:06:02.386 00:06:02.386 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.386 suites 1 1 n/a 0 0 00:06:02.386 tests 26 26 26 0 0 00:06:02.386 asserts 115 115 115 0 n/a 00:06:02.386 00:06:02.386 Elapsed time = 0.003 seconds 00:06:02.644 00:06:02.644 real 0m0.592s 00:06:02.644 user 0m0.883s 00:06:02.644 sys 0m0.193s 00:06:02.644 00:42:37 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.644 00:42:37 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:02.644 ************************************ 00:06:02.644 END TEST accel_dif_functional_tests 00:06:02.644 ************************************ 00:06:02.644 00:42:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.644 00:06:02.644 real 0m33.590s 00:06:02.644 user 0m36.999s 00:06:02.644 sys 0m4.592s 00:06:02.644 00:42:37 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.644 00:42:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.644 ************************************ 00:06:02.644 END TEST accel 00:06:02.644 ************************************ 00:06:02.903 00:42:37 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.903 00:42:37 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:02.903 00:42:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.903 00:42:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.903 00:42:37 -- common/autotest_common.sh@10 -- # set +x 00:06:02.903 ************************************ 00:06:02.903 START TEST accel_rpc 00:06:02.903 ************************************ 00:06:02.903 00:42:37 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:02.903 * Looking for test storage... 00:06:02.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:02.903 00:42:37 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:02.903 00:42:37 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2533492 00:06:02.903 00:42:37 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:02.903 00:42:37 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2533492 00:06:02.903 00:42:37 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2533492 ']' 00:06:02.903 00:42:37 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.903 00:42:37 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.903 00:42:37 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.903 00:42:37 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.903 00:42:37 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.903 [2024-07-16 00:42:37.545788] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
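accel_rpc.sh, which starts here, exercises the RPC surface instead of the perf tool: spdk_tgt is launched with --wait-for-rpc so the accel framework is still uninitialized when the opcode-assignment RPCs arrive, and only then is framework_start_init issued. A rough stand-alone sketch of that flow (the harness waits with waitforlisten; a plain sleep stands in for it here):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/spdk_tgt --wait-for-rpc &
  tgt_pid=$!
  sleep 2                                              # stand-in for waitforlisten
  $SPDK/scripts/rpc.py accel_assign_opc -o copy -m software
  $SPDK/scripts/rpc.py framework_start_init
  kill $tgt_pid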
00:06:02.903 [2024-07-16 00:42:37.545890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2533492 ] 00:06:02.903 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.903 [2024-07-16 00:42:37.602466] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.162 [2024-07-16 00:42:37.708609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.162 00:42:37 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.162 00:42:37 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:03.162 00:42:37 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:03.162 00:42:37 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:03.162 00:42:37 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:03.162 00:42:37 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:03.162 00:42:37 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:03.162 00:42:37 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.162 00:42:37 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.162 00:42:37 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.162 ************************************ 00:06:03.162 START TEST accel_assign_opcode 00:06:03.162 ************************************ 00:06:03.162 00:42:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:03.162 00:42:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:03.162 00:42:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.162 00:42:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:03.162 [2024-07-16 00:42:37.769224] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:03.162 00:42:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.162 00:42:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:03.162 00:42:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.162 00:42:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:03.162 [2024-07-16 00:42:37.777227] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:03.162 00:42:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.162 00:42:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:03.162 00:42:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.162 00:42:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:03.420 00:42:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.420 00:42:38 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:03.420 00:42:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.420 00:42:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
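The verification that follows on the next lines is just the dumped opcode map piped through jq: once framework_start_init has run, the copy opcode should report the module it was assigned to. Under the same assumptions as the sketch above:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # expected output: software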
00:06:03.420 00:42:38 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:03.420 00:42:38 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:03.420 00:42:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.420 software 00:06:03.420 00:06:03.420 real 0m0.298s 00:06:03.420 user 0m0.041s 00:06:03.420 sys 0m0.006s 00:06:03.420 00:42:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.420 00:42:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:03.420 ************************************ 00:06:03.420 END TEST accel_assign_opcode 00:06:03.420 ************************************ 00:06:03.420 00:42:38 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:03.420 00:42:38 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2533492 00:06:03.420 00:42:38 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2533492 ']' 00:06:03.420 00:42:38 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2533492 00:06:03.420 00:42:38 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:03.420 00:42:38 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.420 00:42:38 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2533492 00:06:03.420 00:42:38 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.420 00:42:38 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.420 00:42:38 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2533492' 00:06:03.420 killing process with pid 2533492 00:06:03.420 00:42:38 accel_rpc -- common/autotest_common.sh@967 -- # kill 2533492 00:06:03.420 00:42:38 accel_rpc -- common/autotest_common.sh@972 -- # wait 2533492 00:06:03.988 00:06:03.988 real 0m1.143s 00:06:03.988 user 0m1.080s 00:06:03.988 sys 0m0.419s 00:06:03.988 00:42:38 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.988 00:42:38 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.988 ************************************ 00:06:03.988 END TEST accel_rpc 00:06:03.988 ************************************ 00:06:03.988 00:42:38 -- common/autotest_common.sh@1142 -- # return 0 00:06:03.988 00:42:38 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:03.988 00:42:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.988 00:42:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.988 00:42:38 -- common/autotest_common.sh@10 -- # set +x 00:06:03.988 ************************************ 00:06:03.988 START TEST app_cmdline 00:06:03.988 ************************************ 00:06:03.988 00:42:38 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:03.988 * Looking for test storage... 
00:06:03.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:03.988 00:42:38 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:03.988 00:42:38 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2533698 00:06:03.988 00:42:38 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:03.988 00:42:38 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2533698 00:06:03.988 00:42:38 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2533698 ']' 00:06:03.988 00:42:38 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.988 00:42:38 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.988 00:42:38 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.988 00:42:38 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.988 00:42:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:03.988 [2024-07-16 00:42:38.744216] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:06:03.988 [2024-07-16 00:42:38.744319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2533698 ] 00:06:04.247 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.247 [2024-07-16 00:42:38.814067] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.247 [2024-07-16 00:42:38.932661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.505 00:42:39 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.505 00:42:39 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:04.505 00:42:39 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:04.763 { 00:06:04.763 "version": "SPDK v24.09-pre git sha1 8c20d24e0", 00:06:04.763 "fields": { 00:06:04.763 "major": 24, 00:06:04.763 "minor": 9, 00:06:04.763 "patch": 0, 00:06:04.763 "suffix": "-pre", 00:06:04.763 "commit": "8c20d24e0" 00:06:04.763 } 00:06:04.763 } 00:06:04.763 00:42:39 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:04.763 00:42:39 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:04.763 00:42:39 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:04.763 00:42:39 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:04.763 00:42:39 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:04.763 00:42:39 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.763 00:42:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:04.763 00:42:39 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:04.763 00:42:39 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:04.763 00:42:39 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.763 00:42:39 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:04.763 00:42:39 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:04.763 00:42:39 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:04.763 00:42:39 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:04.763 00:42:39 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:04.763 00:42:39 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:04.763 00:42:39 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.763 00:42:39 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:04.763 00:42:39 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.763 00:42:39 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:04.763 00:42:39 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.763 00:42:39 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:04.763 00:42:39 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:04.763 00:42:39 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.021 request: 00:06:05.021 { 00:06:05.021 "method": "env_dpdk_get_mem_stats", 00:06:05.021 "req_id": 1 00:06:05.021 } 00:06:05.021 Got JSON-RPC error response 00:06:05.021 response: 00:06:05.021 { 00:06:05.021 "code": -32601, 00:06:05.021 "message": "Method not found" 00:06:05.021 } 00:06:05.021 00:42:39 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:05.021 00:42:39 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:05.021 00:42:39 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:05.021 00:42:39 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:05.021 00:42:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2533698 00:06:05.021 00:42:39 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2533698 ']' 00:06:05.021 00:42:39 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2533698 00:06:05.021 00:42:39 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:05.021 00:42:39 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.021 00:42:39 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2533698 00:06:05.021 00:42:39 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.021 00:42:39 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.021 00:42:39 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2533698' 00:06:05.021 killing process with pid 2533698 00:06:05.021 00:42:39 app_cmdline -- common/autotest_common.sh@967 -- # kill 2533698 00:06:05.021 00:42:39 app_cmdline -- common/autotest_common.sh@972 -- # wait 2533698 00:06:05.588 00:06:05.588 real 0m1.598s 00:06:05.588 user 0m1.921s 00:06:05.588 sys 0m0.483s 00:06:05.588 00:42:40 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
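The -32601 (Method not found) response above is the expected outcome of the allow-list: this spdk_tgt instance was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so every other method is rejected before it reaches a handler. A hedged reproduction sketch (default /var/tmp/spdk.sock socket assumed):

  # start the target with a restricted RPC allow-list
  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  # methods on the list succeed
  scripts/rpc.py spdk_get_version
  scripts/rpc.py rpc_get_methods
  # anything else fails with JSON-RPC error -32601 (Method not found)
  scripts/rpc.py env_dpdk_get_mem_stats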
00:06:05.588 00:42:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:05.588 ************************************ 00:06:05.588 END TEST app_cmdline 00:06:05.588 ************************************ 00:06:05.588 00:42:40 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.588 00:42:40 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:05.588 00:42:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.588 00:42:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.588 00:42:40 -- common/autotest_common.sh@10 -- # set +x 00:06:05.588 ************************************ 00:06:05.588 START TEST version 00:06:05.588 ************************************ 00:06:05.588 00:42:40 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:05.588 * Looking for test storage... 00:06:05.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:05.588 00:42:40 version -- app/version.sh@17 -- # get_header_version major 00:06:05.588 00:42:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:05.588 00:42:40 version -- app/version.sh@14 -- # cut -f2 00:06:05.588 00:42:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:05.588 00:42:40 version -- app/version.sh@17 -- # major=24 00:06:05.588 00:42:40 version -- app/version.sh@18 -- # get_header_version minor 00:06:05.588 00:42:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:05.588 00:42:40 version -- app/version.sh@14 -- # cut -f2 00:06:05.588 00:42:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:05.847 00:42:40 version -- app/version.sh@18 -- # minor=9 00:06:05.847 00:42:40 version -- app/version.sh@19 -- # get_header_version patch 00:06:05.847 00:42:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:05.847 00:42:40 version -- app/version.sh@14 -- # cut -f2 00:06:05.847 00:42:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:05.847 00:42:40 version -- app/version.sh@19 -- # patch=0 00:06:05.847 00:42:40 version -- app/version.sh@20 -- # get_header_version suffix 00:06:05.847 00:42:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:05.847 00:42:40 version -- app/version.sh@14 -- # cut -f2 00:06:05.847 00:42:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:05.847 00:42:40 version -- app/version.sh@20 -- # suffix=-pre 00:06:05.847 00:42:40 version -- app/version.sh@22 -- # version=24.9 00:06:05.847 00:42:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:05.847 00:42:40 version -- app/version.sh@28 -- # version=24.9rc0 00:06:05.847 00:42:40 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:05.847 00:42:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:06:05.847 00:42:40 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:05.847 00:42:40 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:05.847 00:06:05.847 real 0m0.109s 00:06:05.847 user 0m0.055s 00:06:05.847 sys 0m0.076s 00:06:05.847 00:42:40 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.847 00:42:40 version -- common/autotest_common.sh@10 -- # set +x 00:06:05.847 ************************************ 00:06:05.847 END TEST version 00:06:05.847 ************************************ 00:06:05.847 00:42:40 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.847 00:42:40 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:05.847 00:42:40 -- spdk/autotest.sh@198 -- # uname -s 00:06:05.847 00:42:40 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:05.847 00:42:40 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:05.847 00:42:40 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:05.847 00:42:40 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:05.847 00:42:40 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:05.847 00:42:40 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:05.847 00:42:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:05.847 00:42:40 -- common/autotest_common.sh@10 -- # set +x 00:06:05.847 00:42:40 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:05.847 00:42:40 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:05.847 00:42:40 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:05.847 00:42:40 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:05.847 00:42:40 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:05.847 00:42:40 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:05.847 00:42:40 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:05.847 00:42:40 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:05.847 00:42:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.847 00:42:40 -- common/autotest_common.sh@10 -- # set +x 00:06:05.847 ************************************ 00:06:05.847 START TEST nvmf_tcp 00:06:05.847 ************************************ 00:06:05.847 00:42:40 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:05.847 * Looking for test storage... 00:06:05.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:05.847 00:42:40 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:05.847 00:42:40 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:05.847 00:42:40 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:05.847 00:42:40 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:05.847 00:42:40 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.847 00:42:40 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.847 00:42:40 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.847 00:42:40 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.847 00:42:40 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.847 00:42:40 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.847 00:42:40 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.847 00:42:40 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.847 00:42:40 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.847 00:42:40 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.847 00:42:40 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:05.847 00:42:40 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:05.847 00:42:40 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.847 00:42:40 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.848 00:42:40 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:05.848 00:42:40 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.848 00:42:40 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:05.848 00:42:40 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.848 00:42:40 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.848 00:42:40 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.848 00:42:40 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.848 00:42:40 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.848 00:42:40 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.848 00:42:40 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:05.848 00:42:40 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.848 00:42:40 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:05.848 00:42:40 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:05.848 00:42:40 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:05.848 00:42:40 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.848 00:42:40 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.848 00:42:40 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.848 00:42:40 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:05.848 00:42:40 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:05.848 00:42:40 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:05.848 00:42:40 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:05.848 00:42:40 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:05.848 00:42:40 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:05.848 00:42:40 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.848 00:42:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.848 00:42:40 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:05.848 00:42:40 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:05.848 00:42:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:05.848 00:42:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.848 00:42:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.848 ************************************ 00:06:05.848 START TEST nvmf_example 00:06:05.848 ************************************ 00:06:05.848 00:42:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:05.848 * Looking for test storage... 
00:06:05.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:05.848 00:42:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:05.848 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:06.107 00:42:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.009 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:08.009 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:08.009 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:08.009 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:08.009 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:08.009 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:08.010 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:08.010 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:08.010 Found net devices under 
0000:0a:00.0: cvl_0_0 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:08.010 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:08.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:08.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:06:08.010 00:06:08.010 --- 10.0.0.2 ping statistics --- 00:06:08.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.010 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:08.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:08.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:06:08.010 00:06:08.010 --- 10.0.0.1 ping statistics --- 00:06:08.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.010 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2535604 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2535604 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2535604 ']' 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
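The nvmf_tcp_init sequence traced above boils down to moving one port of the e810 pair into a private network namespace and addressing both sides on 10.0.0.0/24. Condensed (address-flush steps omitted, interface names as detected on this host), the commands it runs are:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                   # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator reachability
  modprobe nvme-tcp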
00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.010 00:42:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.269 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:09.204 00:42:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:09.204 EAL: No free 2048 kB hugepages reported on node 1 
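Before the perf run whose output follows, the target is configured with five RPC calls, all visible in the trace above. Condensed (rpc_cmd is the test helper that forwards to scripts/rpc.py against the running example target):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512                                               # creates Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420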
00:06:21.433 Initializing NVMe Controllers 00:06:21.433 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:21.433 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:21.433 Initialization complete. Launching workers. 00:06:21.433 ======================================================== 00:06:21.433 Latency(us) 00:06:21.433 Device Information : IOPS MiB/s Average min max 00:06:21.433 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14955.10 58.42 4281.40 895.91 15675.89 00:06:21.433 ======================================================== 00:06:21.433 Total : 14955.10 58.42 4281.40 895.91 15675.89 00:06:21.433 00:06:21.433 00:42:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:21.433 00:42:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:21.434 rmmod nvme_tcp 00:06:21.434 rmmod nvme_fabrics 00:06:21.434 rmmod nvme_keyring 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2535604 ']' 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2535604 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2535604 ']' 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2535604 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2535604 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2535604' 00:06:21.434 killing process with pid 2535604 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 2535604 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 2535604 00:06:21.434 nvmf threads initialize successfully 00:06:21.434 bdev subsystem init successfully 00:06:21.434 created a nvmf target service 00:06:21.434 create targets's poll groups done 00:06:21.434 all subsystems of target started 00:06:21.434 nvmf target is running 00:06:21.434 all subsystems of target stopped 00:06:21.434 destroy targets's poll groups done 00:06:21.434 destroyed the nvmf target service 00:06:21.434 bdev subsystem finish successfully 00:06:21.434 nvmf threads destroy successfully 00:06:21.434 00:42:54 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:21.434 00:42:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:21.692 00:42:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:21.692 00:42:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:21.692 00:42:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:21.692 00:42:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:21.956 00:06:21.956 real 0m15.909s 00:06:21.956 user 0m45.373s 00:06:21.956 sys 0m3.190s 00:06:21.956 00:42:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.956 00:42:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:21.956 ************************************ 00:06:21.956 END TEST nvmf_example 00:06:21.956 ************************************ 00:06:21.956 00:42:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:21.956 00:42:56 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:21.956 00:42:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:21.956 00:42:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.956 00:42:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.956 ************************************ 00:06:21.956 START TEST nvmf_filesystem 00:06:21.956 ************************************ 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:21.956 * Looking for test storage... 
00:06:21.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:21.956 00:42:56 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:21.956 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:21.957 #define SPDK_CONFIG_H 00:06:21.957 #define SPDK_CONFIG_APPS 1 00:06:21.957 #define SPDK_CONFIG_ARCH native 00:06:21.957 #undef SPDK_CONFIG_ASAN 00:06:21.957 #undef SPDK_CONFIG_AVAHI 00:06:21.957 #undef SPDK_CONFIG_CET 00:06:21.957 #define SPDK_CONFIG_COVERAGE 1 00:06:21.957 #define SPDK_CONFIG_CROSS_PREFIX 00:06:21.957 #undef SPDK_CONFIG_CRYPTO 00:06:21.957 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:21.957 #undef SPDK_CONFIG_CUSTOMOCF 00:06:21.957 #undef SPDK_CONFIG_DAOS 00:06:21.957 #define SPDK_CONFIG_DAOS_DIR 00:06:21.957 #define SPDK_CONFIG_DEBUG 1 00:06:21.957 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:21.957 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:21.957 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:21.957 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:21.957 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:21.957 #undef SPDK_CONFIG_DPDK_UADK 00:06:21.957 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:21.957 #define SPDK_CONFIG_EXAMPLES 1 00:06:21.957 #undef SPDK_CONFIG_FC 00:06:21.957 #define SPDK_CONFIG_FC_PATH 00:06:21.957 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:21.957 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:21.957 #undef SPDK_CONFIG_FUSE 00:06:21.957 #undef SPDK_CONFIG_FUZZER 00:06:21.957 #define SPDK_CONFIG_FUZZER_LIB 00:06:21.957 #undef SPDK_CONFIG_GOLANG 00:06:21.957 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:21.957 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:21.957 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:21.957 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:21.957 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:21.957 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:21.957 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:21.957 #define SPDK_CONFIG_IDXD 1 00:06:21.957 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:21.957 #undef SPDK_CONFIG_IPSEC_MB 00:06:21.957 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:21.957 #define SPDK_CONFIG_ISAL 1 00:06:21.957 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:21.957 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:21.957 #define SPDK_CONFIG_LIBDIR 00:06:21.957 #undef SPDK_CONFIG_LTO 00:06:21.957 #define SPDK_CONFIG_MAX_LCORES 128 00:06:21.957 #define SPDK_CONFIG_NVME_CUSE 1 00:06:21.957 #undef SPDK_CONFIG_OCF 00:06:21.957 #define SPDK_CONFIG_OCF_PATH 00:06:21.957 #define 
SPDK_CONFIG_OPENSSL_PATH 00:06:21.957 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:21.957 #define SPDK_CONFIG_PGO_DIR 00:06:21.957 #undef SPDK_CONFIG_PGO_USE 00:06:21.957 #define SPDK_CONFIG_PREFIX /usr/local 00:06:21.957 #undef SPDK_CONFIG_RAID5F 00:06:21.957 #undef SPDK_CONFIG_RBD 00:06:21.957 #define SPDK_CONFIG_RDMA 1 00:06:21.957 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:21.957 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:21.957 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:21.957 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:21.957 #define SPDK_CONFIG_SHARED 1 00:06:21.957 #undef SPDK_CONFIG_SMA 00:06:21.957 #define SPDK_CONFIG_TESTS 1 00:06:21.957 #undef SPDK_CONFIG_TSAN 00:06:21.957 #define SPDK_CONFIG_UBLK 1 00:06:21.957 #define SPDK_CONFIG_UBSAN 1 00:06:21.957 #undef SPDK_CONFIG_UNIT_TESTS 00:06:21.957 #undef SPDK_CONFIG_URING 00:06:21.957 #define SPDK_CONFIG_URING_PATH 00:06:21.957 #undef SPDK_CONFIG_URING_ZNS 00:06:21.957 #undef SPDK_CONFIG_USDT 00:06:21.957 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:21.957 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:21.957 #define SPDK_CONFIG_VFIO_USER 1 00:06:21.957 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:21.957 #define SPDK_CONFIG_VHOST 1 00:06:21.957 #define SPDK_CONFIG_VIRTIO 1 00:06:21.957 #undef SPDK_CONFIG_VTUNE 00:06:21.957 #define SPDK_CONFIG_VTUNE_DIR 00:06:21.957 #define SPDK_CONFIG_WERROR 1 00:06:21.957 #define SPDK_CONFIG_WPDK_DIR 00:06:21.957 #undef SPDK_CONFIG_XNVME 00:06:21.957 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:21.957 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:21.958 00:42:56 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:21.958 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
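The long run of "-- # : 0" / "-- # export SPDK_TEST_*" pairs above is autotest_common.sh giving every test flag a default before exporting it, so flags absent from autorun-spdk.conf come out as 0 while the ones the conf file did set (FUNCTIONAL_TEST, NVMF, NVME_CLI, VFIOUSER, UBSAN, the tcp transport, the e810 NIC class) keep their values. A minimal sketch of that idiom, assuming the usual bash default-parameter form rather than quoting the literal autotest_common.sh source:

    # hypothetical sketch of the ": value" + "export VAR" xtrace pairs seen above
    : "${RUN_NIGHTLY:=0}"                 # keep the value from autorun-spdk.conf, else default to 0
    export RUN_NIGHTLY
    : "${SPDK_TEST_NVMF:=0}"              # the conf file set this to 1 for this run
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"  # trace shows ": tcp"; the built-in default may differ
    export SPDK_TEST_NVMF_TRANSPORT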
00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2537426 ]] 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2537426 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.qmNJkx 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.qmNJkx/tests/target /tmp/spdk.qmNJkx 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=55528304640 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994692608 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6466387968 00:06:21.959 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941708288 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997344256 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390178816 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398940160 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996000768 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997348352 00:06:21.960 00:42:56 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1347584 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:21.960 * Looking for test storage... 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=55528304640 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8680980480 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:21.960 00:42:56 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
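The requested_size/target_space/new_size arithmetic above is set_test_storage choosing where the filesystem test scratch area will live: it asks for 2 GiB plus 64 MiB of headroom (2214592512 bytes), reads df for the filesystem backing test/nvmf/target, accepts that location if enough space is free and the request would not push the filesystem past ~95% utilisation, and otherwise falls back to the /tmp/spdk.qmNJkx candidate from mktemp. A rough sketch of that decision using this run's numbers (the variable layout is illustrative, not the literal autotest_common.sh source):

    requested_size=$(( 2147483648 + 67108864 ))                      # 2214592512, as in the trace
    target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')   # "/" in this run
    target_space=$(df -B1 --output=avail "$target_dir" | tail -1)    # 55528304640
    fs_size=$(df -B1 --output=size  "$target_dir" | tail -1)         # 61994692608
    fs_used=$(df -B1 --output=used  "$target_dir" | tail -1)         # 6466387968
    if (( target_space >= requested_size )); then
        new_size=$(( requested_size + fs_used ))                     # 8680980480
        if (( new_size * 100 / fs_size > 95 )); then
            echo "would exceed 95% of $mount, falling back to /tmp"
        else
            export SPDK_TEST_STORAGE=$target_dir
        fi
    fi

Here 8680980480 bytes is roughly 14% of the 61994692608-byte overlay, so the test storage stays on the workspace mount, as the "Found test storage at ..." line below confirms.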
00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.960 00:42:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:21.961 00:42:56 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:21.961 00:42:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:24.496 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:24.496 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.496 00:42:58 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:24.496 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:24.497 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:24.497 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:24.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:24.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:06:24.497 00:06:24.497 --- 10.0.0.2 ping statistics --- 00:06:24.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.497 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:24.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:24.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:06:24.497 00:06:24.497 --- 10.0.0.1 ping statistics --- 00:06:24.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.497 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.497 ************************************ 00:06:24.497 START TEST nvmf_filesystem_no_in_capsule 00:06:24.497 ************************************ 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
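[Annotation] The setup just logged isolates the target port in its own network namespace so target and initiator traffic crosses the link between the two E810 ports rather than a local loopback path: cvl_0_0 is moved into cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 stays in the host namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, and a ping in each direction verifies connectivity before nvme-tcp is loaded and the first test (nvmf_filesystem_part 0, i.e. in-capsule data size 0) starts. The same wiring, condensed from the commands in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host
  modprobe nvme-tcp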
xtrace_disable 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2539056 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2539056 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2539056 ']' 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.497 00:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.497 [2024-07-16 00:42:58.948672] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:06:24.497 [2024-07-16 00:42:58.948747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:24.497 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.497 [2024-07-16 00:42:59.019082] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.497 [2024-07-16 00:42:59.144156] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:24.497 [2024-07-16 00:42:59.144235] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:24.497 [2024-07-16 00:42:59.144251] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:24.497 [2024-07-16 00:42:59.144269] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:24.497 [2024-07-16 00:42:59.144281] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
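[Annotation] nvmfappstart then launches the SPDK target inside that namespace and waits for its RPC socket: nvmf_tgt is run under ip netns exec with core mask 0xF (the four reactors logged below), and waitforlisten polls /var/tmp/spdk.sock until the app answers before any configuration RPCs are sent. Roughly, with the polling loop as an illustrative stand-in for waitforlisten rather than its exact code:

  ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait until the RPC socket accepts requests
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done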
00:06:24.497 [2024-07-16 00:42:59.144375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.497 [2024-07-16 00:42:59.144430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.497 [2024-07-16 00:42:59.144486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.497 [2024-07-16 00:42:59.144489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.431 00:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.431 00:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:25.431 00:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:25.431 00:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:25.431 00:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:25.431 00:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:25.431 00:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:25.431 00:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:25.431 00:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.431 00:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:25.431 [2024-07-16 00:42:59.960265] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.431 00:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.431 00:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:25.431 00:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.431 00:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:25.431 Malloc1 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
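[Annotation] With the target up, the test provisions it over JSON-RPC (rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock): a TCP transport with in-capsule data size 0 for this variant, a 512 MiB malloc bdev with 512-byte blocks, a subsystem, its namespace, and a listener on 10.0.0.2:4420. The same sequence as plain rpc.py calls, taken from the trace:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1          # 512 MiB, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420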
common/autotest_common.sh@10 -- # set +x 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:25.431 [2024-07-16 00:43:00.141514] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:25.431 { 00:06:25.431 "name": "Malloc1", 00:06:25.431 "aliases": [ 00:06:25.431 "24d432a9-7cd2-44b2-8369-73d884fa4518" 00:06:25.431 ], 00:06:25.431 "product_name": "Malloc disk", 00:06:25.431 "block_size": 512, 00:06:25.431 "num_blocks": 1048576, 00:06:25.431 "uuid": "24d432a9-7cd2-44b2-8369-73d884fa4518", 00:06:25.431 "assigned_rate_limits": { 00:06:25.431 "rw_ios_per_sec": 0, 00:06:25.431 "rw_mbytes_per_sec": 0, 00:06:25.431 "r_mbytes_per_sec": 0, 00:06:25.431 "w_mbytes_per_sec": 0 00:06:25.431 }, 00:06:25.431 "claimed": true, 00:06:25.431 "claim_type": "exclusive_write", 00:06:25.431 "zoned": false, 00:06:25.431 "supported_io_types": { 00:06:25.431 "read": true, 00:06:25.431 "write": true, 00:06:25.431 "unmap": true, 00:06:25.431 "flush": true, 00:06:25.431 "reset": true, 00:06:25.431 "nvme_admin": false, 00:06:25.431 "nvme_io": false, 00:06:25.431 "nvme_io_md": false, 00:06:25.431 "write_zeroes": true, 00:06:25.431 "zcopy": true, 00:06:25.431 "get_zone_info": false, 00:06:25.431 "zone_management": false, 00:06:25.431 "zone_append": false, 00:06:25.431 "compare": false, 00:06:25.431 "compare_and_write": false, 00:06:25.431 "abort": true, 00:06:25.431 "seek_hole": false, 00:06:25.431 "seek_data": false, 00:06:25.431 "copy": true, 00:06:25.431 "nvme_iov_md": false 00:06:25.431 }, 00:06:25.431 "memory_domains": [ 00:06:25.431 { 
00:06:25.431 "dma_device_id": "system", 00:06:25.431 "dma_device_type": 1 00:06:25.431 }, 00:06:25.431 { 00:06:25.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.431 "dma_device_type": 2 00:06:25.431 } 00:06:25.431 ], 00:06:25.431 "driver_specific": {} 00:06:25.431 } 00:06:25.431 ]' 00:06:25.431 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:25.689 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:25.689 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:25.689 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:25.689 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:25.689 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:25.689 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:25.689 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:26.254 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:26.254 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:26.254 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:26.254 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:26.254 00:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:28.152 00:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:28.152 00:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:28.152 00:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:28.152 00:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:28.152 00:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:28.152 00:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:28.152 00:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:28.152 00:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:28.152 00:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:28.153 00:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:06:28.153 00:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:28.153 00:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:28.153 00:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:28.153 00:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:28.153 00:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:28.153 00:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:28.153 00:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:28.410 00:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:29.339 00:43:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:30.710 00:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:30.710 00:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:30.710 00:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:30.710 00:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.710 00:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:30.710 ************************************ 00:06:30.710 START TEST filesystem_ext4 00:06:30.710 ************************************ 00:06:30.710 00:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:30.710 00:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:30.710 00:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:30.710 00:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:30.710 00:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:30.710 00:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:30.710 00:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:30.710 00:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:30.710 00:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:30.710 00:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:30.710 00:43:05 
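[Annotation] Once waitforserial sees the new namespace show up as nvme0n1 (matched by its SPDKISFASTANDAWESOME serial in lsblk), the device size is checked against the expected 536,870,912 bytes, then it is GPT-labelled and given one partition covering the whole disk:

  mkdir -p /mnt/device
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
  sleep 1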
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:30.710 mke2fs 1.46.5 (30-Dec-2021) 00:06:30.710 Discarding device blocks: 0/522240 done 00:06:30.710 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:30.710 Filesystem UUID: 5d6ebc28-68df-4be4-8e10-860beb0bb66a 00:06:30.710 Superblock backups stored on blocks: 00:06:30.710 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:30.710 00:06:30.710 Allocating group tables: 0/64 done 00:06:30.710 Writing inode tables: 0/64 done 00:06:30.710 Creating journal (8192 blocks): done 00:06:30.710 Writing superblocks and filesystem accounting information: 0/64 done 00:06:30.710 00:06:30.710 00:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:30.710 00:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2539056 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:31.641 00:06:31.641 real 0m1.206s 00:06:31.641 user 0m0.020s 00:06:31.641 sys 0m0.054s 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:31.641 ************************************ 00:06:31.641 END TEST filesystem_ext4 00:06:31.641 ************************************ 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:31.641 00:43:06 
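[Annotation] That ext4 pass is the generic nvmf_filesystem_create body: make the filesystem on the partition, mount it, create and remove a file with syncs in between, unmount, and then confirm both that the target process is still alive and that the namespace and its partition remain visible to the host. The per-filesystem cycle, in shorthand:

  mkfs.ext4 -F /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                        # target still running?
  lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still present
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present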
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.641 ************************************ 00:06:31.641 START TEST filesystem_btrfs 00:06:31.641 ************************************ 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:31.641 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:32.206 btrfs-progs v6.6.2 00:06:32.206 See https://btrfs.readthedocs.io for more information. 00:06:32.206 00:06:32.206 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:32.206 NOTE: several default settings have changed in version 5.15, please make sure 00:06:32.206 this does not affect your deployments: 00:06:32.206 - DUP for metadata (-m dup) 00:06:32.206 - enabled no-holes (-O no-holes) 00:06:32.206 - enabled free-space-tree (-R free-space-tree) 00:06:32.206 00:06:32.206 Label: (null) 00:06:32.206 UUID: b9befd0c-7efd-4de5-bc08-8b4959b7b186 00:06:32.206 Node size: 16384 00:06:32.206 Sector size: 4096 00:06:32.206 Filesystem size: 510.00MiB 00:06:32.206 Block group profiles: 00:06:32.206 Data: single 8.00MiB 00:06:32.206 Metadata: DUP 32.00MiB 00:06:32.206 System: DUP 8.00MiB 00:06:32.206 SSD detected: yes 00:06:32.206 Zoned device: no 00:06:32.206 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:32.206 Runtime features: free-space-tree 00:06:32.206 Checksum: crc32c 00:06:32.206 Number of devices: 1 00:06:32.206 Devices: 00:06:32.206 ID SIZE PATH 00:06:32.206 1 510.00MiB /dev/nvme0n1p1 00:06:32.206 00:06:32.206 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:32.206 00:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2539056 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:32.770 00:06:32.770 real 0m1.041s 00:06:32.770 user 0m0.014s 00:06:32.770 sys 0m0.124s 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:32.770 ************************************ 00:06:32.770 END TEST filesystem_btrfs 00:06:32.770 ************************************ 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.770 ************************************ 00:06:32.770 START TEST filesystem_xfs 00:06:32.770 ************************************ 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:32.770 00:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:32.770 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:32.770 = sectsz=512 attr=2, projid32bit=1 00:06:32.770 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:32.770 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:32.770 data = bsize=4096 blocks=130560, imaxpct=25 00:06:32.770 = sunit=0 swidth=0 blks 00:06:32.770 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:32.770 log =internal log bsize=4096 blocks=16384, version=2 00:06:32.770 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:32.770 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:34.143 Discarding blocks...Done. 
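[Annotation] The btrfs and xfs passes follow the same mount/touch/rm/umount cycle; only the mkfs invocation changes, and the make_filesystem helper passes a force flag unconditionally so leftover signatures from the previous pass do not block the format:

  mkfs.btrfs -f /dev/nvme0n1p1
  mkfs.xfs -f /dev/nvme0n1p1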
00:06:34.143 00:43:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:34.143 00:43:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2539056 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:36.042 00:06:36.042 real 0m2.908s 00:06:36.042 user 0m0.012s 00:06:36.042 sys 0m0.063s 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:36.042 ************************************ 00:06:36.042 END TEST filesystem_xfs 00:06:36.042 ************************************ 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:36.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:36.042 00:43:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2539056 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2539056 ']' 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2539056 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2539056 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2539056' 00:06:36.042 killing process with pid 2539056 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2539056 00:06:36.042 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2539056 00:06:36.300 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:36.300 00:06:36.300 real 0m12.092s 00:06:36.300 user 0m46.373s 00:06:36.300 sys 0m1.884s 00:06:36.300 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.300 00:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.300 ************************************ 00:06:36.300 END TEST nvmf_filesystem_no_in_capsule 00:06:36.300 ************************************ 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
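[Annotation] Teardown for this variant: the test partition is deleted under flock, the kernel initiator disconnects from the subsystem, the subsystem is removed over RPC, and killprocess kills the target (SIGTERM by default, the "killing process with pid 2539056" line above) and waits for it to exit. In shorthand:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"        # killprocess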
-le 1 ']' 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:36.300 ************************************ 00:06:36.300 START TEST nvmf_filesystem_in_capsule 00:06:36.300 ************************************ 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2540623 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2540623 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2540623 ']' 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.300 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.559 [2024-07-16 00:43:11.086394] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:06:36.559 [2024-07-16 00:43:11.086480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.559 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.559 [2024-07-16 00:43:11.153419] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:36.559 [2024-07-16 00:43:11.272342] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:36.559 [2024-07-16 00:43:11.272409] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
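[Annotation] The second variant, nvmf_filesystem_in_capsule, now repeats the whole flow with nvmf_filesystem_part 4096: the only functional difference is the transport creation, where -c 4096 (the in-capsule data size, per the test's in_capsule parameter) lets small write payloads travel inside the NVMe/TCP command capsule instead of being transferred separately. The transport RPC for this pass:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096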
00:06:36.559 [2024-07-16 00:43:11.272425] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:36.559 [2024-07-16 00:43:11.272439] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:36.559 [2024-07-16 00:43:11.272451] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:36.559 [2024-07-16 00:43:11.272514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.559 [2024-07-16 00:43:11.272570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.559 [2024-07-16 00:43:11.272621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.559 [2024-07-16 00:43:11.272625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.818 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.818 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:36.818 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:36.818 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:36.818 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.818 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:36.818 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:36.818 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:36.818 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.818 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.818 [2024-07-16 00:43:11.437976] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.818 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.818 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:36.818 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.818 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.077 Malloc1 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.077 00:43:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.077 [2024-07-16 00:43:11.623276] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:37.077 { 00:06:37.077 "name": "Malloc1", 00:06:37.077 "aliases": [ 00:06:37.077 "7f90584d-f809-4227-b20a-62dee7664c74" 00:06:37.077 ], 00:06:37.077 "product_name": "Malloc disk", 00:06:37.077 "block_size": 512, 00:06:37.077 "num_blocks": 1048576, 00:06:37.077 "uuid": "7f90584d-f809-4227-b20a-62dee7664c74", 00:06:37.077 "assigned_rate_limits": { 00:06:37.077 "rw_ios_per_sec": 0, 00:06:37.077 "rw_mbytes_per_sec": 0, 00:06:37.077 "r_mbytes_per_sec": 0, 00:06:37.077 "w_mbytes_per_sec": 0 00:06:37.077 }, 00:06:37.077 "claimed": true, 00:06:37.077 "claim_type": "exclusive_write", 00:06:37.077 "zoned": false, 00:06:37.077 "supported_io_types": { 00:06:37.077 "read": true, 00:06:37.077 "write": true, 00:06:37.077 "unmap": true, 00:06:37.077 "flush": true, 00:06:37.077 "reset": true, 00:06:37.077 "nvme_admin": false, 00:06:37.077 "nvme_io": false, 00:06:37.077 "nvme_io_md": false, 00:06:37.077 "write_zeroes": true, 00:06:37.077 "zcopy": true, 00:06:37.077 "get_zone_info": false, 00:06:37.077 "zone_management": false, 00:06:37.077 
"zone_append": false, 00:06:37.077 "compare": false, 00:06:37.077 "compare_and_write": false, 00:06:37.077 "abort": true, 00:06:37.077 "seek_hole": false, 00:06:37.077 "seek_data": false, 00:06:37.077 "copy": true, 00:06:37.077 "nvme_iov_md": false 00:06:37.077 }, 00:06:37.077 "memory_domains": [ 00:06:37.077 { 00:06:37.077 "dma_device_id": "system", 00:06:37.077 "dma_device_type": 1 00:06:37.077 }, 00:06:37.077 { 00:06:37.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.077 "dma_device_type": 2 00:06:37.077 } 00:06:37.077 ], 00:06:37.077 "driver_specific": {} 00:06:37.077 } 00:06:37.077 ]' 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:37.077 00:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:37.679 00:43:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:37.679 00:43:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:37.679 00:43:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:37.679 00:43:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:37.679 00:43:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:40.203 00:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:40.203 00:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:40.203 00:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:40.203 00:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:40.203 00:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:40.203 00:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:40.203 00:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:40.203 00:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:06:40.203 00:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:40.203 00:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:40.203 00:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:40.203 00:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:40.203 00:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:40.203 00:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:40.203 00:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:40.203 00:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:40.203 00:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:40.203 00:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:40.461 00:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:41.394 00:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:41.394 00:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:41.394 00:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:41.394 00:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.394 00:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.652 ************************************ 00:06:41.652 START TEST filesystem_in_capsule_ext4 00:06:41.652 ************************************ 00:06:41.652 00:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:41.652 00:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:41.652 00:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:41.652 00:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:41.652 00:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:41.652 00:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:41.652 00:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:41.652 00:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:41.652 00:43:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:41.652 00:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:41.652 00:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:41.652 mke2fs 1.46.5 (30-Dec-2021) 00:06:41.652 Discarding device blocks: 0/522240 done 00:06:41.652 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:41.652 Filesystem UUID: 702d932c-85f9-4018-8fc1-0e38fc56ed05 00:06:41.652 Superblock backups stored on blocks: 00:06:41.652 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:41.652 00:06:41.652 Allocating group tables: 0/64 done 00:06:41.652 Writing inode tables: 0/64 done 00:06:41.910 Creating journal (8192 blocks): done 00:06:42.732 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:06:42.732 00:06:42.732 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:42.732 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2540623 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:42.990 00:06:42.990 real 0m1.472s 00:06:42.990 user 0m0.018s 00:06:42.990 sys 0m0.063s 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:42.990 ************************************ 00:06:42.990 END TEST filesystem_in_capsule_ext4 00:06:42.990 ************************************ 
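For reference, the in-capsule ext4 pass traced above reduces to the following sequence on the initiator side (a hand-runnable sketch assembled from the commands visible in this log; the host NQN/UUID, subsystem NQN, target address 10.0.0.2:4420 and the /dev/nvme0n1 device name are the values from this particular run):

  # connect to the SPDK NVMe-oF/TCP subsystem exported by the target (512 MiB malloc bdev in this run)
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
               --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # carve a single GPT partition out of the namespace, then build and exercise the filesystem
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
  mkfs.ext4 -F /dev/nvme0n1p1
  mkdir -p /mnt/device
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device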
00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:42.990 ************************************ 00:06:42.990 START TEST filesystem_in_capsule_btrfs 00:06:42.990 ************************************ 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:42.990 00:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:43.557 btrfs-progs v6.6.2 00:06:43.557 See https://btrfs.readthedocs.io for more information. 00:06:43.557 00:06:43.557 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:43.557 NOTE: several default settings have changed in version 5.15, please make sure 00:06:43.557 this does not affect your deployments: 00:06:43.557 - DUP for metadata (-m dup) 00:06:43.557 - enabled no-holes (-O no-holes) 00:06:43.557 - enabled free-space-tree (-R free-space-tree) 00:06:43.557 00:06:43.557 Label: (null) 00:06:43.557 UUID: 88868088-4a31-400c-ab7d-6bc3aafb9e3e 00:06:43.557 Node size: 16384 00:06:43.557 Sector size: 4096 00:06:43.557 Filesystem size: 510.00MiB 00:06:43.557 Block group profiles: 00:06:43.557 Data: single 8.00MiB 00:06:43.557 Metadata: DUP 32.00MiB 00:06:43.557 System: DUP 8.00MiB 00:06:43.557 SSD detected: yes 00:06:43.557 Zoned device: no 00:06:43.557 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:43.557 Runtime features: free-space-tree 00:06:43.557 Checksum: crc32c 00:06:43.557 Number of devices: 1 00:06:43.557 Devices: 00:06:43.557 ID SIZE PATH 00:06:43.557 1 510.00MiB /dev/nvme0n1p1 00:06:43.557 00:06:43.557 00:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:43.557 00:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2540623 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:44.491 00:06:44.491 real 0m1.391s 00:06:44.491 user 0m0.022s 00:06:44.491 sys 0m0.114s 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:44.491 ************************************ 00:06:44.491 END TEST filesystem_in_capsule_btrfs 00:06:44.491 ************************************ 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.491 ************************************ 00:06:44.491 START TEST filesystem_in_capsule_xfs 00:06:44.491 ************************************ 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:44.491 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:44.491 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:44.491 = sectsz=512 attr=2, projid32bit=1 00:06:44.491 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:44.491 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:44.491 data = bsize=4096 blocks=130560, imaxpct=25 00:06:44.491 = sunit=0 swidth=0 blks 00:06:44.491 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:44.491 log =internal log bsize=4096 blocks=16384, version=2 00:06:44.491 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:44.491 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:45.425 Discarding blocks...Done. 
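The ext4, btrfs and xfs passes all funnel through the same make_filesystem helper; per the xtrace lines above, the only per-filesystem difference on this path is the force flag handed to mkfs (-F for ext4, -f for btrfs and xfs). A minimal sketch of that selection, inferred from the trace rather than copied from autotest_common.sh (the real helper also keeps a retry counter, the "local i=0" visible above), is:

  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      # ext4's mkfs spells "force" as -F; btrfs and xfs use -f
      [ "$fstype" = ext4 ] && force=-F || force=-f
      mkfs."$fstype" $force "$dev_name"
  }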
00:06:45.425 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:45.425 00:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:47.321 00:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:47.321 00:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:47.321 00:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:47.321 00:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:47.321 00:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:47.321 00:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:47.321 00:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2540623 00:06:47.321 00:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:47.321 00:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:47.321 00:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:47.321 00:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:47.321 00:06:47.321 real 0m2.724s 00:06:47.321 user 0m0.016s 00:06:47.321 sys 0m0.062s 00:06:47.321 00:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.321 00:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:47.321 ************************************ 00:06:47.321 END TEST filesystem_in_capsule_xfs 00:06:47.321 ************************************ 00:06:47.321 00:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:47.321 00:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:47.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:47.580 00:43:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2540623 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2540623 ']' 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2540623 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2540623 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2540623' 00:06:47.580 killing process with pid 2540623 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2540623 00:06:47.580 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2540623 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:48.147 00:06:48.147 real 0m11.762s 00:06:48.147 user 0m44.993s 00:06:48.147 sys 0m1.775s 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:48.147 ************************************ 00:06:48.147 END TEST nvmf_filesystem_in_capsule 00:06:48.147 ************************************ 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:48.147 rmmod nvme_tcp 00:06:48.147 rmmod nvme_fabrics 00:06:48.147 rmmod nvme_keyring 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:48.147 00:43:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.682 00:43:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:50.682 00:06:50.682 real 0m28.403s 00:06:50.682 user 1m32.259s 00:06:50.682 sys 0m5.328s 00:06:50.682 00:43:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.682 00:43:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:50.682 ************************************ 00:06:50.682 END TEST nvmf_filesystem 00:06:50.682 ************************************ 00:06:50.682 00:43:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:50.682 00:43:24 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:50.682 00:43:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:50.682 00:43:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.682 00:43:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.682 ************************************ 00:06:50.682 START TEST nvmf_target_discovery 00:06:50.682 ************************************ 00:06:50.682 00:43:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:50.682 * Looking for test storage... 
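Teardown of the filesystem suite is symmetric with its setup and is likewise visible in the trace: disconnect the initiator, remove the subsystem over RPC, stop the target, then unload the kernel NVMe/TCP modules. Roughly, using the commands from the log (rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 2540623                 # the nvmf_tgt pid in this run
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics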
00:06:50.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:50.682 00:43:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.682 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:50.682 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.682 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.682 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.682 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.682 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.682 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.682 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.682 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.682 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.682 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.682 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:50.682 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:50.682 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.682 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.682 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:50.683 00:43:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.592 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:52.592 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:52.592 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:52.592 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:52.592 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:52.592 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:52.592 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:52.592 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:52.592 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:52.592 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:52.592 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:52.592 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:52.592 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:52.592 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:52.592 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:52.592 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:52.592 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:52.593 00:43:27 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:52.593 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:52.593 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:52.593 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:52.593 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:52.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:52.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:06:52.593 00:06:52.593 --- 10.0.0.2 ping statistics --- 00:06:52.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.593 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:52.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:52.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:06:52.593 00:06:52.593 --- 10.0.0.1 ping statistics --- 00:06:52.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.593 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2544214 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2544214 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2544214 ']' 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:52.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.593 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.593 [2024-07-16 00:43:27.267327] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:06:52.593 [2024-07-16 00:43:27.267405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.593 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.593 [2024-07-16 00:43:27.337018] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.862 [2024-07-16 00:43:27.463152] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:52.862 [2024-07-16 00:43:27.463206] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:52.862 [2024-07-16 00:43:27.463233] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:52.862 [2024-07-16 00:43:27.463245] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:52.862 [2024-07-16 00:43:27.463255] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:52.862 [2024-07-16 00:43:27.463353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.862 [2024-07-16 00:43:27.463413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.862 [2024-07-16 00:43:27.463463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.862 [2024-07-16 00:43:27.463466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.862 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.862 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:52.862 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:52.862 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:52.862 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.124 [2024-07-16 00:43:27.627902] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.124 Null1 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.124 [2024-07-16 00:43:27.668242] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.124 Null2 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:53.124 00:43:27 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.124 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.125 Null3 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.125 Null4 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.125 00:43:27 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.125 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:06:53.383 00:06:53.383 Discovery Log Number of Records 6, Generation counter 6 00:06:53.383 =====Discovery Log Entry 0====== 00:06:53.383 trtype: tcp 00:06:53.383 adrfam: ipv4 00:06:53.383 subtype: current discovery subsystem 00:06:53.383 treq: not required 00:06:53.383 portid: 0 00:06:53.383 trsvcid: 4420 00:06:53.383 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:53.383 traddr: 10.0.0.2 00:06:53.383 eflags: explicit discovery connections, duplicate discovery information 00:06:53.383 sectype: none 00:06:53.383 =====Discovery Log Entry 1====== 00:06:53.383 trtype: tcp 00:06:53.383 adrfam: ipv4 00:06:53.383 subtype: nvme subsystem 00:06:53.383 treq: not required 00:06:53.383 portid: 0 00:06:53.383 trsvcid: 4420 00:06:53.383 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:53.383 traddr: 10.0.0.2 00:06:53.383 eflags: none 00:06:53.383 sectype: none 00:06:53.383 =====Discovery Log Entry 2====== 00:06:53.383 trtype: tcp 00:06:53.383 adrfam: ipv4 00:06:53.383 subtype: nvme subsystem 00:06:53.383 treq: not required 00:06:53.383 portid: 0 00:06:53.383 trsvcid: 4420 00:06:53.383 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:53.383 traddr: 10.0.0.2 00:06:53.383 eflags: none 00:06:53.383 sectype: none 00:06:53.383 =====Discovery Log Entry 3====== 00:06:53.383 trtype: tcp 00:06:53.383 adrfam: ipv4 00:06:53.383 subtype: nvme subsystem 00:06:53.383 treq: not required 00:06:53.383 portid: 0 00:06:53.383 trsvcid: 4420 00:06:53.383 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:53.383 traddr: 10.0.0.2 00:06:53.383 eflags: none 00:06:53.383 sectype: none 00:06:53.383 =====Discovery Log Entry 4====== 00:06:53.383 trtype: tcp 00:06:53.383 adrfam: ipv4 00:06:53.383 subtype: nvme subsystem 00:06:53.383 treq: not required 
00:06:53.383 portid: 0 00:06:53.383 trsvcid: 4420 00:06:53.383 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:53.383 traddr: 10.0.0.2 00:06:53.383 eflags: none 00:06:53.383 sectype: none 00:06:53.383 =====Discovery Log Entry 5====== 00:06:53.383 trtype: tcp 00:06:53.383 adrfam: ipv4 00:06:53.383 subtype: discovery subsystem referral 00:06:53.383 treq: not required 00:06:53.383 portid: 0 00:06:53.383 trsvcid: 4430 00:06:53.383 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:53.383 traddr: 10.0.0.2 00:06:53.383 eflags: none 00:06:53.383 sectype: none 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:53.383 Perform nvmf subsystem discovery via RPC 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.383 [ 00:06:53.383 { 00:06:53.383 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:53.383 "subtype": "Discovery", 00:06:53.383 "listen_addresses": [ 00:06:53.383 { 00:06:53.383 "trtype": "TCP", 00:06:53.383 "adrfam": "IPv4", 00:06:53.383 "traddr": "10.0.0.2", 00:06:53.383 "trsvcid": "4420" 00:06:53.383 } 00:06:53.383 ], 00:06:53.383 "allow_any_host": true, 00:06:53.383 "hosts": [] 00:06:53.383 }, 00:06:53.383 { 00:06:53.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:53.383 "subtype": "NVMe", 00:06:53.383 "listen_addresses": [ 00:06:53.383 { 00:06:53.383 "trtype": "TCP", 00:06:53.383 "adrfam": "IPv4", 00:06:53.383 "traddr": "10.0.0.2", 00:06:53.383 "trsvcid": "4420" 00:06:53.383 } 00:06:53.383 ], 00:06:53.383 "allow_any_host": true, 00:06:53.383 "hosts": [], 00:06:53.383 "serial_number": "SPDK00000000000001", 00:06:53.383 "model_number": "SPDK bdev Controller", 00:06:53.383 "max_namespaces": 32, 00:06:53.383 "min_cntlid": 1, 00:06:53.383 "max_cntlid": 65519, 00:06:53.383 "namespaces": [ 00:06:53.383 { 00:06:53.383 "nsid": 1, 00:06:53.383 "bdev_name": "Null1", 00:06:53.383 "name": "Null1", 00:06:53.383 "nguid": "6EC3C713879F4A6985B193FE08A100D2", 00:06:53.383 "uuid": "6ec3c713-879f-4a69-85b1-93fe08a100d2" 00:06:53.383 } 00:06:53.383 ] 00:06:53.383 }, 00:06:53.383 { 00:06:53.383 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:53.383 "subtype": "NVMe", 00:06:53.383 "listen_addresses": [ 00:06:53.383 { 00:06:53.383 "trtype": "TCP", 00:06:53.383 "adrfam": "IPv4", 00:06:53.383 "traddr": "10.0.0.2", 00:06:53.383 "trsvcid": "4420" 00:06:53.383 } 00:06:53.383 ], 00:06:53.383 "allow_any_host": true, 00:06:53.383 "hosts": [], 00:06:53.383 "serial_number": "SPDK00000000000002", 00:06:53.383 "model_number": "SPDK bdev Controller", 00:06:53.383 "max_namespaces": 32, 00:06:53.383 "min_cntlid": 1, 00:06:53.383 "max_cntlid": 65519, 00:06:53.383 "namespaces": [ 00:06:53.383 { 00:06:53.383 "nsid": 1, 00:06:53.383 "bdev_name": "Null2", 00:06:53.383 "name": "Null2", 00:06:53.383 "nguid": "A8C5F1D1F1404202A8E62AF7F2C3D5B1", 00:06:53.383 "uuid": "a8c5f1d1-f140-4202-a8e6-2af7f2c3d5b1" 00:06:53.383 } 00:06:53.383 ] 00:06:53.383 }, 00:06:53.383 { 00:06:53.383 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:53.383 "subtype": "NVMe", 00:06:53.383 "listen_addresses": [ 00:06:53.383 { 00:06:53.383 "trtype": "TCP", 00:06:53.383 "adrfam": "IPv4", 00:06:53.383 "traddr": "10.0.0.2", 00:06:53.383 "trsvcid": "4420" 00:06:53.383 } 00:06:53.383 ], 00:06:53.383 "allow_any_host": true, 
00:06:53.383 "hosts": [], 00:06:53.383 "serial_number": "SPDK00000000000003", 00:06:53.383 "model_number": "SPDK bdev Controller", 00:06:53.383 "max_namespaces": 32, 00:06:53.383 "min_cntlid": 1, 00:06:53.383 "max_cntlid": 65519, 00:06:53.383 "namespaces": [ 00:06:53.383 { 00:06:53.383 "nsid": 1, 00:06:53.383 "bdev_name": "Null3", 00:06:53.383 "name": "Null3", 00:06:53.383 "nguid": "86C5CA3B971A47DE98E08EDFC9B339C3", 00:06:53.383 "uuid": "86c5ca3b-971a-47de-98e0-8edfc9b339c3" 00:06:53.383 } 00:06:53.383 ] 00:06:53.383 }, 00:06:53.383 { 00:06:53.383 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:53.383 "subtype": "NVMe", 00:06:53.383 "listen_addresses": [ 00:06:53.383 { 00:06:53.383 "trtype": "TCP", 00:06:53.383 "adrfam": "IPv4", 00:06:53.383 "traddr": "10.0.0.2", 00:06:53.383 "trsvcid": "4420" 00:06:53.383 } 00:06:53.383 ], 00:06:53.383 "allow_any_host": true, 00:06:53.383 "hosts": [], 00:06:53.383 "serial_number": "SPDK00000000000004", 00:06:53.383 "model_number": "SPDK bdev Controller", 00:06:53.383 "max_namespaces": 32, 00:06:53.383 "min_cntlid": 1, 00:06:53.383 "max_cntlid": 65519, 00:06:53.383 "namespaces": [ 00:06:53.383 { 00:06:53.383 "nsid": 1, 00:06:53.383 "bdev_name": "Null4", 00:06:53.383 "name": "Null4", 00:06:53.383 "nguid": "2F85848C0A5049A9854358333C653133", 00:06:53.383 "uuid": "2f85848c-0a50-49a9-8543-58333c653133" 00:06:53.383 } 00:06:53.383 ] 00:06:53.383 } 00:06:53.383 ] 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.383 00:43:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.383 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:06:53.383 00:43:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:53.384 rmmod nvme_tcp 00:06:53.384 rmmod nvme_fabrics 00:06:53.384 rmmod nvme_keyring 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2544214 ']' 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2544214 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2544214 ']' 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2544214 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.384 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2544214 00:06:53.678 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:53.678 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:53.678 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2544214' 00:06:53.678 killing process with pid 2544214 00:06:53.678 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2544214 00:06:53.678 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2544214 00:06:53.937 00:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:53.937 00:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:53.937 00:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:53.937 00:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:53.937 00:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:53.937 00:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.937 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:53.937 00:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.847 00:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:55.847 00:06:55.847 real 0m5.508s 00:06:55.847 user 0m4.515s 00:06:55.847 sys 0m1.901s 00:06:55.847 00:43:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.847 00:43:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.847 ************************************ 00:06:55.847 END TEST nvmf_target_discovery 00:06:55.847 ************************************ 00:06:55.847 00:43:30 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:06:55.847 00:43:30 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:55.847 00:43:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:55.847 00:43:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.847 00:43:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:55.847 ************************************ 00:06:55.847 START TEST nvmf_referrals 00:06:55.847 ************************************ 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:55.847 * Looking for test storage... 00:06:55.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:55.847 00:43:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:58.378 00:43:32 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:58.378 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:58.378 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:58.378 00:43:32 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:58.378 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:58.378 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:58.378 00:43:32 
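This part of the trace is the phy-mode network bring-up from nvmf/common.sh: the two ice-driven E810 ports are discovered as cvl_0_0 and cvl_0_1, and the target-side port is moved into its own network namespace so that initiator (10.0.0.1) and target (10.0.0.2) talk over a real link rather than loopback. Reduced to the iproute2 commands visible in the trace (interface and namespace names exactly as in this run), the topology is:

    ip netns add cvl_0_0_ns_spdk                                        # namespace that will own the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target-side interface into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace

The remaining lines bring both links (plus lo inside the namespace) up, open TCP port 4420 with iptables, and ping in each direction before the fabric is considered usable.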
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:58.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:58.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:06:58.378 00:06:58.378 --- 10.0.0.2 ping statistics --- 00:06:58.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.378 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:58.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:58.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:06:58.378 00:06:58.378 --- 10.0.0.1 ping statistics --- 00:06:58.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.378 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:58.378 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:58.379 00:43:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:58.379 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:58.379 00:43:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:58.379 00:43:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.379 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2546192 00:06:58.379 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:58.379 00:43:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2546192 00:06:58.379 00:43:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2546192 ']' 00:06:58.379 00:43:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.379 00:43:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.379 00:43:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
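What follows in the trace is nvmfappstart: the target binary is launched inside the namespace, the harness waits for the RPC socket at /var/tmp/spdk.sock, and only then creates the TCP transport and the discovery listener on port 8009. A minimal reproduction under the same assumptions as earlier (SPDK build tree, rpc.py on the default socket); the polling loop is a simplified stand-in for the waitforlisten helper, and $nvmfpid is only a local variable of this sketch:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -m 0xF: four cores; -e: tracepoint group mask
    nvmfpid=$!
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done    # crude waitforlisten substitute
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                      # transport flags copied from the trace
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery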
00:06:58.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.379 00:43:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.379 00:43:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.379 [2024-07-16 00:43:32.743040] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:06:58.379 [2024-07-16 00:43:32.743132] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.379 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.379 [2024-07-16 00:43:32.822571] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.379 [2024-07-16 00:43:32.946799] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:58.379 [2024-07-16 00:43:32.946857] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:58.379 [2024-07-16 00:43:32.946873] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:58.379 [2024-07-16 00:43:32.946896] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:58.379 [2024-07-16 00:43:32.946908] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:58.379 [2024-07-16 00:43:32.946964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.379 [2024-07-16 00:43:32.946989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.379 [2024-07-16 00:43:32.947050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.379 [2024-07-16 00:43:32.947054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.379 [2024-07-16 00:43:33.099651] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.379 [2024-07-16 00:43:33.111901] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.379 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.650 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:58.906 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:59.162 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:59.162 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:59.162 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:59.162 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:59.162 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:59.162 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:59.162 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:59.162 00:43:33 
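The referrals test proper is a round-trip over nvmf_discovery_add_referral, nvmf_discovery_get_referrals and nvmf_discovery_remove_referral, each time comparing the RPC view with what an initiator actually sees in the discovery log on port 8009. With the xtrace framing removed (addresses, port 4430 and the jq filters are the ones the test itself uses):

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a $ip -s 4430
    done
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length                          # expect 3
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a $ip -s 4430
    done

A referral may also name a subsystem (-n nqn.2016-06.io.spdk:cnode1, as traced just above), in which case the initiator reports it as an "nvme subsystem" discovery-log entry rather than a plain discovery referral, which is what the subsequent jq/subnqn checks assert.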
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:59.162 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:59.162 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:59.162 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:59.162 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:59.162 00:43:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:59.419 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:59.675 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:59.675 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:59.675 00:43:34 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:59.675 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:59.675 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:59.675 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:59.675 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:59.675 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:59.675 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:59.675 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:59.675 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:59.675 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:59.675 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:59.931 
00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:59.931 rmmod nvme_tcp 00:06:59.931 rmmod nvme_fabrics 00:06:59.931 rmmod nvme_keyring 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2546192 ']' 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2546192 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2546192 ']' 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2546192 00:06:59.931 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:00.188 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.188 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2546192 00:07:00.188 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:00.188 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.188 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2546192' 00:07:00.188 killing process with pid 2546192 00:07:00.188 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2546192 00:07:00.188 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2546192 00:07:00.447 00:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:00.447 00:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:00.447 00:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:00.447 00:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:00.447 00:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:00.447 00:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.447 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:00.447 00:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.352 00:43:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:02.352 00:07:02.352 real 0m6.492s 00:07:02.352 user 0m9.386s 00:07:02.352 sys 0m2.066s 00:07:02.352 00:43:37 
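Each of these scripts ends with nvmftestfini, visible above only through its side effects: the rmmod messages, the "killing process" line, and the final address flush. The underlying commands are short; in this sketch $nvmfpid stands for the PID of the nvmf_tgt started earlier (an assumption of the sketch), and the namespace teardown itself happens in _remove_spdk_ns, whose body is suppressed in the trace:

    modprobe -v -r nvme-tcp              # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # stop the target app started in the background earlier
    ip -4 addr flush cvl_0_1             # drop the initiator-side test address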
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.352 00:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.352 ************************************ 00:07:02.352 END TEST nvmf_referrals 00:07:02.352 ************************************ 00:07:02.352 00:43:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:02.352 00:43:37 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:02.352 00:43:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:02.352 00:43:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.352 00:43:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:02.352 ************************************ 00:07:02.352 START TEST nvmf_connect_disconnect 00:07:02.352 ************************************ 00:07:02.352 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:02.610 * Looking for test storage... 00:07:02.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.610 00:43:37 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:02.610 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:02.611 00:43:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:04.514 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:04.514 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:04.515 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:04.515 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:04.515 00:43:39 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:04.515 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:04.515 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:04.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:04.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:07:04.515 00:07:04.515 --- 10.0.0.2 ping statistics --- 00:07:04.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.515 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:07:04.515 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:04.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:04.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:07:04.774 00:07:04.774 --- 10.0.0.1 ping statistics --- 00:07:04.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.774 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:07:04.774 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.774 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:04.774 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:04.774 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.774 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:04.774 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:04.774 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.774 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:04.774 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:04.774 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:04.774 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:04.775 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:04.775 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:04.775 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2548484 00:07:04.775 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:04.775 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2548484 00:07:04.775 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2548484 ']' 00:07:04.775 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.775 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.775 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.775 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.775 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:04.775 [2024-07-16 00:43:39.356344] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
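Before the target application comes up, nvmftestinit/nvmf_tcp_init builds the two-sided TCP topology that the rest of the run relies on. Condensed into a sketch from the ip/iptables calls traced above (the cvl_0_* interface names and 10.0.0.x addresses are the values from this particular run, and the nvmf_tgt path is shortened for readability):

    # Target-side NIC moves into its own namespace; the initiator NIC stays
    # in the host namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> host

    # Every nvmf_tgt invocation is then wrapped in the target namespace:
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF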
00:07:04.775 [2024-07-16 00:43:39.356441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.775 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.775 [2024-07-16 00:43:39.422058] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.033 [2024-07-16 00:43:39.533622] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:05.034 [2024-07-16 00:43:39.533680] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:05.034 [2024-07-16 00:43:39.533693] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:05.034 [2024-07-16 00:43:39.533704] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:05.034 [2024-07-16 00:43:39.533728] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:05.034 [2024-07-16 00:43:39.533825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.034 [2024-07-16 00:43:39.533886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.034 [2024-07-16 00:43:39.533935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.034 [2024-07-16 00:43:39.533939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:05.034 [2024-07-16 00:43:39.677568] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:05.034 00:43:39 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:05.034 [2024-07-16 00:43:39.729213] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:05.034 00:43:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:08.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:10.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:13.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:16.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:19.205 rmmod nvme_tcp 00:07:19.205 rmmod nvme_fabrics 00:07:19.205 rmmod nvme_keyring 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2548484 ']' 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2548484 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- 
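Target-side setup for this test is fully visible in the rpc_cmd calls above: one TCP transport, a 64 MiB / 512 B malloc bdev, a subsystem with that bdev as a namespace, and a listener on 10.0.0.2:4420. The connect/disconnect loop itself runs with xtrace disabled, so only the five "disconnected 1 controller(s)" lines appear in the log; the initiator-side cycle below is therefore an approximation, not the exact connect_disconnect.sh flags:

    # Target configuration, as issued via rpc_cmd above:
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512                    # creates Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Approximate initiator loop (num_iterations=5); each nvme disconnect
    # prints one of the "disconnected 1 controller(s)" lines seen above.
    for _ in $(seq 1 5); do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done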
common/autotest_common.sh@948 -- # '[' -z 2548484 ']' 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2548484 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2548484 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2548484' 00:07:19.205 killing process with pid 2548484 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2548484 00:07:19.205 00:43:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2548484 00:07:19.464 00:43:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:19.464 00:43:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:19.464 00:43:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:19.464 00:43:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:19.464 00:43:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:19.464 00:43:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.464 00:43:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:19.464 00:43:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.370 00:43:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:21.370 00:07:21.370 real 0m18.981s 00:07:21.370 user 0m57.223s 00:07:21.370 sys 0m3.287s 00:07:21.370 00:43:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.370 00:43:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:21.370 ************************************ 00:07:21.370 END TEST nvmf_connect_disconnect 00:07:21.370 ************************************ 00:07:21.370 00:43:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:21.370 00:43:56 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:21.370 00:43:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:21.370 00:43:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.370 00:43:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:21.370 ************************************ 00:07:21.370 START TEST nvmf_multitarget 00:07:21.370 ************************************ 00:07:21.370 00:43:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:21.629 * Looking for test storage... 
00:07:21.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:21.629 00:43:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:23.549 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:23.549 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:23.549 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:23.549 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.549 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:23.550 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:23.550 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:23.550 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:23.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:23.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:07:23.809 00:07:23.809 --- 10.0.0.2 ping statistics --- 00:07:23.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.809 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:23.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:23.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:07:23.809 00:07:23.809 --- 10.0.0.1 ping statistics --- 00:07:23.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.809 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2552255 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2552255 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2552255 ']' 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.809 00:43:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:23.809 [2024-07-16 00:43:58.471892] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:07:23.809 [2024-07-16 00:43:58.471999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.809 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.809 [2024-07-16 00:43:58.541958] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.068 [2024-07-16 00:43:58.663487] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.068 [2024-07-16 00:43:58.663549] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.068 [2024-07-16 00:43:58.663575] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.068 [2024-07-16 00:43:58.663590] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.068 [2024-07-16 00:43:58.663601] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:24.068 [2024-07-16 00:43:58.663681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.068 [2024-07-16 00:43:58.663735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.068 [2024-07-16 00:43:58.663787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.068 [2024-07-16 00:43:58.663790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.001 00:43:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:25.001 00:43:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:25.001 00:43:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:25.001 00:43:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:25.001 00:43:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:25.001 00:43:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.001 00:43:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:25.001 00:43:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:25.002 00:43:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:25.002 00:43:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:25.002 00:43:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:25.002 "nvmf_tgt_1" 00:07:25.002 00:43:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:25.260 "nvmf_tgt_2" 00:07:25.260 00:43:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:25.260 00:43:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:25.260 00:43:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:07:25.260 00:43:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:25.518 true 00:07:25.518 00:44:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:25.518 true 00:07:25.518 00:44:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:25.518 00:44:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:25.518 00:44:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:25.518 00:44:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:25.518 00:44:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:25.518 00:44:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:25.518 00:44:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:25.518 00:44:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:25.518 00:44:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:25.518 00:44:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:25.518 00:44:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:25.518 rmmod nvme_tcp 00:07:25.776 rmmod nvme_fabrics 00:07:25.776 rmmod nvme_keyring 00:07:25.776 00:44:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:25.776 00:44:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:25.776 00:44:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:25.776 00:44:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2552255 ']' 00:07:25.776 00:44:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2552255 00:07:25.776 00:44:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2552255 ']' 00:07:25.776 00:44:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2552255 00:07:25.776 00:44:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:25.776 00:44:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:25.776 00:44:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2552255 00:07:25.776 00:44:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:25.776 00:44:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:25.776 00:44:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2552255' 00:07:25.776 killing process with pid 2552255 00:07:25.776 00:44:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2552255 00:07:25.776 00:44:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2552255 00:07:26.036 00:44:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:26.036 00:44:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:26.036 00:44:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:26.036 00:44:00 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:26.036 00:44:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:26.036 00:44:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.036 00:44:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.036 00:44:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.942 00:44:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:27.942 00:07:27.942 real 0m6.559s 00:07:27.942 user 0m9.440s 00:07:27.942 sys 0m1.985s 00:07:27.942 00:44:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.942 00:44:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:27.942 ************************************ 00:07:27.942 END TEST nvmf_multitarget 00:07:27.942 ************************************ 00:07:27.942 00:44:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:27.942 00:44:02 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:27.942 00:44:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:27.942 00:44:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.942 00:44:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:28.200 ************************************ 00:07:28.200 START TEST nvmf_rpc 00:07:28.200 ************************************ 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:28.200 * Looking for test storage... 
00:07:28.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.200 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:28.201 00:44:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:30.101 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:30.101 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:30.101 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:30.102 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:30.102 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:30.102 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:30.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:30.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:07:30.369 00:07:30.369 --- 10.0.0.2 ping statistics --- 00:07:30.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.369 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:30.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:30.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:07:30.369 00:07:30.369 --- 10.0.0.1 ping statistics --- 00:07:30.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.369 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2554457 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2554457 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2554457 ']' 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:30.369 00:44:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.369 [2024-07-16 00:44:04.976093] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:07:30.369 [2024-07-16 00:44:04.976168] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.369 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.369 [2024-07-16 00:44:05.044951] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.656 [2024-07-16 00:44:05.168302] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.656 [2024-07-16 00:44:05.168360] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:30.656 [2024-07-16 00:44:05.168377] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.656 [2024-07-16 00:44:05.168390] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.656 [2024-07-16 00:44:05.168401] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:30.656 [2024-07-16 00:44:05.168457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.656 [2024-07-16 00:44:05.168512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.656 [2024-07-16 00:44:05.168578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.656 [2024-07-16 00:44:05.168581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.220 00:44:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.220 00:44:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:31.220 00:44:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:31.220 00:44:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:31.220 00:44:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.479 00:44:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.479 00:44:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:31.479 00:44:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.479 00:44:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:31.479 "tick_rate": 2700000000, 00:07:31.479 "poll_groups": [ 00:07:31.479 { 00:07:31.479 "name": "nvmf_tgt_poll_group_000", 00:07:31.479 "admin_qpairs": 0, 00:07:31.479 "io_qpairs": 0, 00:07:31.479 "current_admin_qpairs": 0, 00:07:31.479 "current_io_qpairs": 0, 00:07:31.479 "pending_bdev_io": 0, 00:07:31.479 "completed_nvme_io": 0, 00:07:31.479 "transports": [] 00:07:31.479 }, 00:07:31.479 { 00:07:31.479 "name": "nvmf_tgt_poll_group_001", 00:07:31.479 "admin_qpairs": 0, 00:07:31.479 "io_qpairs": 0, 00:07:31.479 "current_admin_qpairs": 0, 00:07:31.479 "current_io_qpairs": 0, 00:07:31.479 "pending_bdev_io": 0, 00:07:31.479 "completed_nvme_io": 0, 00:07:31.479 "transports": [] 00:07:31.479 }, 00:07:31.479 { 00:07:31.479 "name": "nvmf_tgt_poll_group_002", 00:07:31.479 "admin_qpairs": 0, 00:07:31.479 "io_qpairs": 0, 00:07:31.479 "current_admin_qpairs": 0, 00:07:31.479 "current_io_qpairs": 0, 00:07:31.479 "pending_bdev_io": 0, 00:07:31.479 "completed_nvme_io": 0, 00:07:31.479 "transports": [] 00:07:31.479 }, 00:07:31.479 { 00:07:31.479 "name": "nvmf_tgt_poll_group_003", 00:07:31.479 "admin_qpairs": 0, 00:07:31.479 "io_qpairs": 0, 00:07:31.479 "current_admin_qpairs": 0, 00:07:31.479 "current_io_qpairs": 0, 00:07:31.479 "pending_bdev_io": 0, 00:07:31.479 "completed_nvme_io": 0, 00:07:31.479 "transports": [] 00:07:31.479 } 00:07:31.479 ] 00:07:31.479 }' 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.479 [2024-07-16 00:44:06.087570] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:31.479 "tick_rate": 2700000000, 00:07:31.479 "poll_groups": [ 00:07:31.479 { 00:07:31.479 "name": "nvmf_tgt_poll_group_000", 00:07:31.479 "admin_qpairs": 0, 00:07:31.479 "io_qpairs": 0, 00:07:31.479 "current_admin_qpairs": 0, 00:07:31.479 "current_io_qpairs": 0, 00:07:31.479 "pending_bdev_io": 0, 00:07:31.479 "completed_nvme_io": 0, 00:07:31.479 "transports": [ 00:07:31.479 { 00:07:31.479 "trtype": "TCP" 00:07:31.479 } 00:07:31.479 ] 00:07:31.479 }, 00:07:31.479 { 00:07:31.479 "name": "nvmf_tgt_poll_group_001", 00:07:31.479 "admin_qpairs": 0, 00:07:31.479 "io_qpairs": 0, 00:07:31.479 "current_admin_qpairs": 0, 00:07:31.479 "current_io_qpairs": 0, 00:07:31.479 "pending_bdev_io": 0, 00:07:31.479 "completed_nvme_io": 0, 00:07:31.479 "transports": [ 00:07:31.479 { 00:07:31.479 "trtype": "TCP" 00:07:31.479 } 00:07:31.479 ] 00:07:31.479 }, 00:07:31.479 { 00:07:31.479 "name": "nvmf_tgt_poll_group_002", 00:07:31.479 "admin_qpairs": 0, 00:07:31.479 "io_qpairs": 0, 00:07:31.479 "current_admin_qpairs": 0, 00:07:31.479 "current_io_qpairs": 0, 00:07:31.479 "pending_bdev_io": 0, 00:07:31.479 "completed_nvme_io": 0, 00:07:31.479 "transports": [ 00:07:31.479 { 00:07:31.479 "trtype": "TCP" 00:07:31.479 } 00:07:31.479 ] 00:07:31.479 }, 00:07:31.479 { 00:07:31.479 "name": "nvmf_tgt_poll_group_003", 00:07:31.479 "admin_qpairs": 0, 00:07:31.479 "io_qpairs": 0, 00:07:31.479 "current_admin_qpairs": 0, 00:07:31.479 "current_io_qpairs": 0, 00:07:31.479 "pending_bdev_io": 0, 00:07:31.479 "completed_nvme_io": 0, 00:07:31.479 "transports": [ 00:07:31.479 { 00:07:31.479 "trtype": "TCP" 00:07:31.479 } 00:07:31.479 ] 00:07:31.479 } 00:07:31.479 ] 00:07:31.479 }' 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.479 Malloc1 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.479 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.480 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.480 [2024-07-16 00:44:06.235892] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:31.737 [2024-07-16 00:44:06.258329] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:31.737 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:31.737 could not add new controller: failed to write to nvme-fabrics device 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.737 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:32.303 00:44:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:32.303 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:32.303 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:32.303 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:32.303 00:44:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:34.829 00:44:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:34.829 00:44:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:34.829 00:44:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:34.829 00:44:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:34.829 00:44:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:34.829 00:44:08 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:34.829 00:44:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:34.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:34.829 [2024-07-16 00:44:09.088360] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:34.829 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:34.829 could not add new controller: failed to write to nvme-fabrics device 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.829 00:44:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:35.087 00:44:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:35.087 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:35.087 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:35.087 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:35.087 00:44:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:36.982 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:36.982 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:36.982 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:36.982 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:36.982 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:36.982 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:36.982 00:44:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:37.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:37.239 00:44:11 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.239 [2024-07-16 00:44:11.828118] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.239 00:44:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:37.802 00:44:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:37.802 00:44:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:37.802 00:44:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:37.802 00:44:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:37.802 00:44:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:39.694 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:39.694 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:39.694 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:39.951 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:39.951 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:39.951 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:39.951 00:44:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:39.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:39.951 00:44:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:39.951 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.952 [2024-07-16 00:44:14.593510] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.952 00:44:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:40.884 00:44:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:40.884 00:44:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:40.884 00:44:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:40.884 00:44:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:40.884 00:44:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:42.782 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:42.782 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:42.782 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:42.782 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:42.782 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:42.782 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:42.782 00:44:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:42.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.782 00:44:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:42.782 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:42.782 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:42.782 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.782 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:42.782 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.782 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:42.782 00:44:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:42.782 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.782 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.782 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.782 00:44:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.783 [2024-07-16 00:44:17.398218] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.783 00:44:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:43.349 00:44:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:43.349 00:44:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:43.349 00:44:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:43.350 00:44:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:43.350 00:44:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:45.875 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:45.875 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:45.875 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:45.875 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:45.875 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:45.875 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:45.875 00:44:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:45.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:45.875 00:44:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:45.875 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:45.875 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:45.875 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.875 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:45.875 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.875 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:45.875 00:44:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:45.875 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.875 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.875 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:45.875 00:44:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.876 [2024-07-16 00:44:20.223840] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.876 00:44:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:46.134 00:44:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:46.134 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:46.134 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:46.134 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:46.134 00:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:48.704 
00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:48.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.704 [2024-07-16 00:44:22.995938] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.704 00:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.704 00:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.704 00:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:48.704 00:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.704 00:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.704 00:44:23 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.704 00:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:48.960 00:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:48.960 00:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:48.960 00:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:48.960 00:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:48.960 00:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:51.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.485 [2024-07-16 00:44:25.810430] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.485 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 [2024-07-16 00:44:25.858493] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 [2024-07-16 00:44:25.906650] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
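The waitforserial / waitforserial_disconnect records earlier in this run (the lsblk and grep polling around each nvme connect and nvme disconnect, common/autotest_common.sh@1198-1231) follow a fixed pattern: sleep, list block devices with their serial numbers, and compare the match count against the expected number of namespaces. A minimal sketch of that polling pattern, reconstructed from the trace; the 15-retry limit, the 2-second sleep, and the SPDKISFASTANDAWESOME serial match the records above, but the exact helper bodies are an assumption, not the real autotest_common.sh code:

  # Poll until the expected number of block devices carrying $serial shows up (assumed reconstruction).
  waitforserial() {
      local serial=$1 count=${2:-1} i=0
      while (( i++ <= 15 )); do
          sleep 2
          # lsblk prints NAME,SERIAL for every block device; count the devices matching our serial.
          if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == count )); then
              return 0
          fi
      done
      return 1
  }

  # Poll until no block device with $serial remains after nvme disconnect (assumed reconstruction).
  waitforserial_disconnect() {
      local serial=$1 i=0
      while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
          (( i++ > 15 )) && return 1
          sleep 1
      done
      return 0
  }

Polling lsblk rather than the nvme list output keeps the check independent of nvme-cli formatting, which is presumably why the trace shows the serial being grepped out of lsblk in both directions.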
00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 [2024-07-16 00:44:25.954817] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
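The target/rpc.sh@99-107 records surrounding this point trace a churn loop that builds the same subsystem up and tears it down five times purely over JSON-RPC, with no host connection in between. A minimal sketch of that loop, assuming rpc_cmd simply forwards to scripts/rpc.py against the running target; the NQN, serial, listener address, and namespace name are taken directly from the records above:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path seen elsewhere in this log
  loops=5
  for i in $(seq 1 $loops); do
      # Create the subsystem, expose it on NVMe/TCP, attach the Malloc1 bdev, open it to any host.
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
      $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      # Tear it back down: drop namespace 1, then delete the subsystem itself.
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done

The earlier rpc.sh@81-94 loop visible further up is the same lifecycle with an nvme connect / nvme disconnect pair and the waitforserial checks sketched above inserted between setup and teardown, so it exercises the data path as well as the RPC path.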
00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 [2024-07-16 00:44:26.002998] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.486 00:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:51.486 "tick_rate": 2700000000, 00:07:51.486 "poll_groups": [ 00:07:51.486 { 00:07:51.486 "name": "nvmf_tgt_poll_group_000", 00:07:51.486 "admin_qpairs": 2, 00:07:51.486 "io_qpairs": 84, 00:07:51.486 "current_admin_qpairs": 0, 00:07:51.486 "current_io_qpairs": 0, 00:07:51.486 "pending_bdev_io": 0, 00:07:51.486 "completed_nvme_io": 135, 00:07:51.486 "transports": [ 00:07:51.486 { 00:07:51.486 "trtype": "TCP" 00:07:51.486 } 00:07:51.486 ] 00:07:51.486 }, 00:07:51.486 { 00:07:51.486 "name": "nvmf_tgt_poll_group_001", 00:07:51.486 "admin_qpairs": 2, 00:07:51.486 "io_qpairs": 84, 00:07:51.486 "current_admin_qpairs": 0, 00:07:51.486 "current_io_qpairs": 0, 00:07:51.486 "pending_bdev_io": 0, 00:07:51.486 "completed_nvme_io": 134, 00:07:51.486 "transports": [ 00:07:51.486 { 00:07:51.486 "trtype": "TCP" 00:07:51.486 } 00:07:51.486 ] 00:07:51.486 }, 00:07:51.486 { 00:07:51.486 
"name": "nvmf_tgt_poll_group_002", 00:07:51.486 "admin_qpairs": 1, 00:07:51.486 "io_qpairs": 84, 00:07:51.487 "current_admin_qpairs": 0, 00:07:51.487 "current_io_qpairs": 0, 00:07:51.487 "pending_bdev_io": 0, 00:07:51.487 "completed_nvme_io": 233, 00:07:51.487 "transports": [ 00:07:51.487 { 00:07:51.487 "trtype": "TCP" 00:07:51.487 } 00:07:51.487 ] 00:07:51.487 }, 00:07:51.487 { 00:07:51.487 "name": "nvmf_tgt_poll_group_003", 00:07:51.487 "admin_qpairs": 2, 00:07:51.487 "io_qpairs": 84, 00:07:51.487 "current_admin_qpairs": 0, 00:07:51.487 "current_io_qpairs": 0, 00:07:51.487 "pending_bdev_io": 0, 00:07:51.487 "completed_nvme_io": 184, 00:07:51.487 "transports": [ 00:07:51.487 { 00:07:51.487 "trtype": "TCP" 00:07:51.487 } 00:07:51.487 ] 00:07:51.487 } 00:07:51.487 ] 00:07:51.487 }' 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:51.487 rmmod nvme_tcp 00:07:51.487 rmmod nvme_fabrics 00:07:51.487 rmmod nvme_keyring 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2554457 ']' 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2554457 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2554457 ']' 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2554457 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2554457 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2554457' 00:07:51.487 killing process with pid 2554457 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2554457 00:07:51.487 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2554457 00:07:52.055 00:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:52.055 00:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:52.055 00:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:52.055 00:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:52.055 00:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:52.055 00:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.055 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:52.055 00:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.960 00:44:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:53.960 00:07:53.960 real 0m25.872s 00:07:53.960 user 1m24.734s 00:07:53.960 sys 0m4.083s 00:07:53.960 00:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.960 00:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.960 ************************************ 00:07:53.960 END TEST nvmf_rpc 00:07:53.960 ************************************ 00:07:53.960 00:44:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:53.960 00:44:28 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:53.960 00:44:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:53.960 00:44:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.960 00:44:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:53.960 ************************************ 00:07:53.960 START TEST nvmf_invalid 00:07:53.960 ************************************ 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:53.960 * Looking for test storage... 
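Before the nvmf_rpc teardown above, the script sanity-checks the target's nvmf_get_stats output with a small jq-plus-awk reducer (the jsum calls at target/rpc.sh@112-113). A minimal, self-contained sketch of that helper, with the filter strings taken from the trace; the function body is reconstructed from the jq and awk records and may differ in detail from the real autotest helper:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  stats=$($rpc nvmf_get_stats)

  # Sum one numeric field across all poll groups in the nvmf_get_stats JSON.
  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }

  # In this run the four poll groups reported 7 admin qpairs and 336 I/O qpairs in total.
  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))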
00:07:53.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.960 00:44:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.218 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:54.218 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:54.218 00:44:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:54.218 00:44:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:56.120 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:56.120 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:56.120 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:56.120 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.120 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:56.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:56.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:07:56.379 00:07:56.379 --- 10.0.0.2 ping statistics --- 00:07:56.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.379 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:56.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:07:56.379 00:07:56.379 --- 10.0.0.1 ping statistics --- 00:07:56.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.379 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2558994 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2558994 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2558994 ']' 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.379 00:44:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:56.379 [2024-07-16 00:44:31.044965] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
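The nvmf_tcp_init records above put one port of the e810 pair into a private network namespace so the same machine can act as both initiator and target. The commands below restate that setup in order, using the interface names, addresses, and namespace from the trace; this is a condensed sketch, not the full nvmf/common.sh logic:

  # Target port goes into its own namespace; the initiator port stays in the root namespace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # 10.0.0.1 is the initiator side, 10.0.0.2 the target side (NVMF_FIRST_TARGET_IP).
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Allow NVMe/TCP traffic (port 4420) in on the initiator interface, then verify reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched inside cvl_0_0_ns_spdk (the ip netns exec record above), so the nvme connect commands in these tests reach a target in a separate namespace rather than talking to plain localhost.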
00:07:56.379 [2024-07-16 00:44:31.045057] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.379 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.379 [2024-07-16 00:44:31.113945] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:56.638 [2024-07-16 00:44:31.237938] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.638 [2024-07-16 00:44:31.237999] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.638 [2024-07-16 00:44:31.238015] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.638 [2024-07-16 00:44:31.238029] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.638 [2024-07-16 00:44:31.238040] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.638 [2024-07-16 00:44:31.238112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.638 [2024-07-16 00:44:31.238174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.638 [2024-07-16 00:44:31.238243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.638 [2024-07-16 00:44:31.238245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.572 00:44:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.572 00:44:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:57.572 00:44:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:57.572 00:44:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:57.572 00:44:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:57.572 00:44:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.572 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:57.572 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6993 00:07:57.572 [2024-07-16 00:44:32.287847] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:57.572 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:07:57.572 { 00:07:57.572 "nqn": "nqn.2016-06.io.spdk:cnode6993", 00:07:57.572 "tgt_name": "foobar", 00:07:57.572 "method": "nvmf_create_subsystem", 00:07:57.572 "req_id": 1 00:07:57.572 } 00:07:57.572 Got JSON-RPC error response 00:07:57.572 response: 00:07:57.572 { 00:07:57.572 "code": -32603, 00:07:57.572 "message": "Unable to find target foobar" 00:07:57.572 }' 00:07:57.572 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:57.572 { 00:07:57.572 "nqn": "nqn.2016-06.io.spdk:cnode6993", 00:07:57.572 "tgt_name": "foobar", 00:07:57.572 "method": "nvmf_create_subsystem", 00:07:57.572 "req_id": 1 00:07:57.572 } 00:07:57.572 Got JSON-RPC error response 00:07:57.572 response: 00:07:57.572 { 00:07:57.572 "code": -32603, 00:07:57.572 "message": "Unable to find target foobar" 00:07:57.572 } 
== *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:57.572 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:57.572 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6315 00:07:57.829 [2024-07-16 00:44:32.540740] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6315: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:57.829 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:57.829 { 00:07:57.829 "nqn": "nqn.2016-06.io.spdk:cnode6315", 00:07:57.829 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:57.829 "method": "nvmf_create_subsystem", 00:07:57.829 "req_id": 1 00:07:57.829 } 00:07:57.829 Got JSON-RPC error response 00:07:57.829 response: 00:07:57.829 { 00:07:57.829 "code": -32602, 00:07:57.829 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:57.829 }' 00:07:57.829 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:57.829 { 00:07:57.829 "nqn": "nqn.2016-06.io.spdk:cnode6315", 00:07:57.829 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:57.829 "method": "nvmf_create_subsystem", 00:07:57.829 "req_id": 1 00:07:57.829 } 00:07:57.829 Got JSON-RPC error response 00:07:57.829 response: 00:07:57.829 { 00:07:57.829 "code": -32602, 00:07:57.829 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:57.829 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:57.829 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:57.829 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6352 00:07:58.086 [2024-07-16 00:44:32.797564] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6352: invalid model number 'SPDK_Controller' 00:07:58.086 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:58.086 { 00:07:58.086 "nqn": "nqn.2016-06.io.spdk:cnode6352", 00:07:58.086 "model_number": "SPDK_Controller\u001f", 00:07:58.086 "method": "nvmf_create_subsystem", 00:07:58.086 "req_id": 1 00:07:58.086 } 00:07:58.086 Got JSON-RPC error response 00:07:58.086 response: 00:07:58.086 { 00:07:58.086 "code": -32602, 00:07:58.086 "message": "Invalid MN SPDK_Controller\u001f" 00:07:58.086 }' 00:07:58.086 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:58.086 { 00:07:58.086 "nqn": "nqn.2016-06.io.spdk:cnode6352", 00:07:58.086 "model_number": "SPDK_Controller\u001f", 00:07:58.086 "method": "nvmf_create_subsystem", 00:07:58.086 "req_id": 1 00:07:58.086 } 00:07:58.086 Got JSON-RPC error response 00:07:58.086 response: 00:07:58.086 { 00:07:58.086 "code": -32602, 00:07:58.086 "message": "Invalid MN SPDK_Controller\u001f" 00:07:58.086 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:58.086 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:58.086 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' 
'88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:07:58.087 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:07:58.344 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:07:58.344 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.344 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.344 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:07:58.344 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:07:58.344 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:07:58.344 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.344 00:44:32 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.344 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:07:58.344 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:07:58.344 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:07:58.344 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.344 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.344 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:58.344 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:58.344 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:58.344 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.345 00:44:32 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ R == \- ]] 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'RFT|Y"V\29jSMI^V3+\ Z' 00:07:58.345 00:44:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'RFT|Y"V\29jSMI^V3+\ Z' nqn.2016-06.io.spdk:cnode18630 00:07:58.603 [2024-07-16 00:44:33.118643] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18630: invalid serial number 'RFT|Y"V\29jSMI^V3+\ Z' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:07:58.604 { 00:07:58.604 "nqn": "nqn.2016-06.io.spdk:cnode18630", 00:07:58.604 "serial_number": "RFT|Y\"V\\29jSMI^V3+\\ Z", 00:07:58.604 "method": "nvmf_create_subsystem", 00:07:58.604 "req_id": 1 00:07:58.604 } 00:07:58.604 Got JSON-RPC error response 00:07:58.604 response: 00:07:58.604 { 
00:07:58.604 "code": -32602, 00:07:58.604 "message": "Invalid SN RFT|Y\"V\\29jSMI^V3+\\ Z" 00:07:58.604 }' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:07:58.604 { 00:07:58.604 "nqn": "nqn.2016-06.io.spdk:cnode18630", 00:07:58.604 "serial_number": "RFT|Y\"V\\29jSMI^V3+\\ Z", 00:07:58.604 "method": "nvmf_create_subsystem", 00:07:58.604 "req_id": 1 00:07:58.604 } 00:07:58.604 Got JSON-RPC error response 00:07:58.604 response: 00:07:58.604 { 00:07:58.604 "code": -32602, 00:07:58.604 "message": "Invalid SN RFT|Y\"V\\29jSMI^V3+\\ Z" 00:07:58.604 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 
00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 
00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 
00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:58.604 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '28CT;tq*RLPb(pp*Gh8`zg'\''=Zd$^bzd&|@|j8uOy' 00:07:58.605 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '28CT;tq*RLPb(pp*Gh8`zg'\''=Zd$^bzd&|@|j8uOy' nqn.2016-06.io.spdk:cnode23477 00:07:58.862 [2024-07-16 00:44:33.515982] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23477: invalid model number '28CT;tq*RLPb(pp*Gh8`zg'=Zd$^bzd&|@|j8uOy' 00:07:58.862 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:07:58.862 { 00:07:58.862 "nqn": "nqn.2016-06.io.spdk:cnode23477", 00:07:58.862 "model_number": "28CT;tq*RLPb(pp*Gh8\u007f`zg'\''=Zd$^bzd&|@|j8uOy", 00:07:58.862 "method": "nvmf_create_subsystem", 00:07:58.862 "req_id": 1 00:07:58.863 } 00:07:58.863 Got JSON-RPC error response 00:07:58.863 response: 00:07:58.863 { 00:07:58.863 "code": -32602, 00:07:58.863 "message": "Invalid MN 28CT;tq*RLPb(pp*Gh8\u007f`zg'\''=Zd$^bzd&|@|j8uOy" 00:07:58.863 }' 00:07:58.863 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:07:58.863 { 00:07:58.863 "nqn": "nqn.2016-06.io.spdk:cnode23477", 00:07:58.863 "model_number": "28CT;tq*RLPb(pp*Gh8\u007f`zg'=Zd$^bzd&|@|j8uOy", 00:07:58.863 "method": "nvmf_create_subsystem", 00:07:58.863 "req_id": 1 00:07:58.863 } 00:07:58.863 Got JSON-RPC error response 00:07:58.863 response: 00:07:58.863 { 00:07:58.863 "code": -32602, 00:07:58.863 "message": "Invalid MN 28CT;tq*RLPb(pp*Gh8\u007f`zg'=Zd$^bzd&|@|j8uOy" 00:07:58.863 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:58.863 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:59.121 [2024-07-16 00:44:33.760886] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.121 00:44:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:59.378 00:44:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:59.378 00:44:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:59.378 00:44:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:59.378 00:44:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:59.378 00:44:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:59.636 [2024-07-16 00:44:34.250438] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:59.636 00:44:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:07:59.636 { 00:07:59.636 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:59.636 "listen_address": { 00:07:59.636 "trtype": "tcp", 00:07:59.636 "traddr": "", 00:07:59.636 "trsvcid": "4421" 00:07:59.636 }, 00:07:59.636 "method": "nvmf_subsystem_remove_listener", 00:07:59.636 "req_id": 1 00:07:59.636 } 00:07:59.636 Got JSON-RPC error response 00:07:59.636 response: 00:07:59.636 { 00:07:59.636 "code": -32602, 00:07:59.636 "message": "Invalid parameters" 00:07:59.636 }' 00:07:59.636 00:44:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:07:59.636 { 00:07:59.636 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:59.636 "listen_address": { 00:07:59.636 "trtype": "tcp", 00:07:59.636 "traddr": "", 00:07:59.636 "trsvcid": "4421" 00:07:59.636 }, 00:07:59.636 "method": "nvmf_subsystem_remove_listener", 00:07:59.636 "req_id": 1 00:07:59.636 } 00:07:59.636 Got JSON-RPC error response 00:07:59.636 response: 00:07:59.636 { 00:07:59.636 "code": -32602, 00:07:59.636 "message": "Invalid parameters" 00:07:59.636 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:59.636 00:44:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode203 -i 0 00:07:59.893 [2024-07-16 00:44:34.499220] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode203: invalid cntlid range [0-65519] 00:07:59.893 00:44:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:07:59.893 { 00:07:59.893 "nqn": "nqn.2016-06.io.spdk:cnode203", 00:07:59.893 "min_cntlid": 0, 00:07:59.893 "method": "nvmf_create_subsystem", 00:07:59.893 "req_id": 1 00:07:59.893 } 00:07:59.893 Got JSON-RPC error response 00:07:59.893 response: 00:07:59.893 { 00:07:59.893 "code": -32602, 00:07:59.893 "message": "Invalid cntlid range [0-65519]" 00:07:59.893 }' 00:07:59.893 00:44:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:07:59.893 { 00:07:59.893 "nqn": "nqn.2016-06.io.spdk:cnode203", 00:07:59.893 "min_cntlid": 0, 00:07:59.893 "method": "nvmf_create_subsystem", 00:07:59.893 "req_id": 1 00:07:59.893 } 00:07:59.893 Got JSON-RPC error response 00:07:59.893 response: 00:07:59.893 { 00:07:59.893 "code": -32602, 00:07:59.893 "message": "Invalid cntlid range [0-65519]" 00:07:59.893 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 
00:07:59.893 00:44:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17630 -i 65520 00:08:00.149 [2024-07-16 00:44:34.744017] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17630: invalid cntlid range [65520-65519] 00:08:00.149 00:44:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:08:00.149 { 00:08:00.149 "nqn": "nqn.2016-06.io.spdk:cnode17630", 00:08:00.149 "min_cntlid": 65520, 00:08:00.149 "method": "nvmf_create_subsystem", 00:08:00.149 "req_id": 1 00:08:00.149 } 00:08:00.149 Got JSON-RPC error response 00:08:00.149 response: 00:08:00.149 { 00:08:00.149 "code": -32602, 00:08:00.149 "message": "Invalid cntlid range [65520-65519]" 00:08:00.149 }' 00:08:00.149 00:44:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:08:00.149 { 00:08:00.149 "nqn": "nqn.2016-06.io.spdk:cnode17630", 00:08:00.149 "min_cntlid": 65520, 00:08:00.149 "method": "nvmf_create_subsystem", 00:08:00.149 "req_id": 1 00:08:00.149 } 00:08:00.149 Got JSON-RPC error response 00:08:00.149 response: 00:08:00.149 { 00:08:00.149 "code": -32602, 00:08:00.149 "message": "Invalid cntlid range [65520-65519]" 00:08:00.149 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:00.149 00:44:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1738 -I 0 00:08:00.405 [2024-07-16 00:44:34.988834] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1738: invalid cntlid range [1-0] 00:08:00.405 00:44:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:08:00.405 { 00:08:00.405 "nqn": "nqn.2016-06.io.spdk:cnode1738", 00:08:00.405 "max_cntlid": 0, 00:08:00.405 "method": "nvmf_create_subsystem", 00:08:00.405 "req_id": 1 00:08:00.405 } 00:08:00.405 Got JSON-RPC error response 00:08:00.405 response: 00:08:00.405 { 00:08:00.405 "code": -32602, 00:08:00.405 "message": "Invalid cntlid range [1-0]" 00:08:00.405 }' 00:08:00.405 00:44:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:08:00.405 { 00:08:00.405 "nqn": "nqn.2016-06.io.spdk:cnode1738", 00:08:00.405 "max_cntlid": 0, 00:08:00.405 "method": "nvmf_create_subsystem", 00:08:00.405 "req_id": 1 00:08:00.405 } 00:08:00.405 Got JSON-RPC error response 00:08:00.405 response: 00:08:00.405 { 00:08:00.405 "code": -32602, 00:08:00.405 "message": "Invalid cntlid range [1-0]" 00:08:00.405 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:00.405 00:44:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10565 -I 65520 00:08:00.662 [2024-07-16 00:44:35.229643] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10565: invalid cntlid range [1-65520] 00:08:00.662 00:44:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:08:00.662 { 00:08:00.662 "nqn": "nqn.2016-06.io.spdk:cnode10565", 00:08:00.662 "max_cntlid": 65520, 00:08:00.662 "method": "nvmf_create_subsystem", 00:08:00.662 "req_id": 1 00:08:00.662 } 00:08:00.662 Got JSON-RPC error response 00:08:00.662 response: 00:08:00.662 { 00:08:00.662 "code": -32602, 00:08:00.662 "message": "Invalid cntlid range [1-65520]" 00:08:00.662 }' 00:08:00.662 00:44:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 
request: 00:08:00.662 { 00:08:00.662 "nqn": "nqn.2016-06.io.spdk:cnode10565", 00:08:00.662 "max_cntlid": 65520, 00:08:00.662 "method": "nvmf_create_subsystem", 00:08:00.662 "req_id": 1 00:08:00.662 } 00:08:00.662 Got JSON-RPC error response 00:08:00.662 response: 00:08:00.662 { 00:08:00.662 "code": -32602, 00:08:00.662 "message": "Invalid cntlid range [1-65520]" 00:08:00.662 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:00.662 00:44:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8234 -i 6 -I 5 00:08:00.918 [2024-07-16 00:44:35.470484] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8234: invalid cntlid range [6-5] 00:08:00.918 00:44:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:08:00.918 { 00:08:00.918 "nqn": "nqn.2016-06.io.spdk:cnode8234", 00:08:00.918 "min_cntlid": 6, 00:08:00.918 "max_cntlid": 5, 00:08:00.918 "method": "nvmf_create_subsystem", 00:08:00.918 "req_id": 1 00:08:00.918 } 00:08:00.918 Got JSON-RPC error response 00:08:00.918 response: 00:08:00.918 { 00:08:00.918 "code": -32602, 00:08:00.918 "message": "Invalid cntlid range [6-5]" 00:08:00.918 }' 00:08:00.918 00:44:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:08:00.918 { 00:08:00.918 "nqn": "nqn.2016-06.io.spdk:cnode8234", 00:08:00.918 "min_cntlid": 6, 00:08:00.918 "max_cntlid": 5, 00:08:00.918 "method": "nvmf_create_subsystem", 00:08:00.918 "req_id": 1 00:08:00.918 } 00:08:00.918 Got JSON-RPC error response 00:08:00.918 response: 00:08:00.918 { 00:08:00.918 "code": -32602, 00:08:00.918 "message": "Invalid cntlid range [6-5]" 00:08:00.918 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:00.918 00:44:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:00.918 00:44:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:08:00.918 { 00:08:00.918 "name": "foobar", 00:08:00.919 "method": "nvmf_delete_target", 00:08:00.919 "req_id": 1 00:08:00.919 } 00:08:00.919 Got JSON-RPC error response 00:08:00.919 response: 00:08:00.919 { 00:08:00.919 "code": -32602, 00:08:00.919 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:00.919 }' 00:08:00.919 00:44:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:08:00.919 { 00:08:00.919 "name": "foobar", 00:08:00.919 "method": "nvmf_delete_target", 00:08:00.919 "req_id": 1 00:08:00.919 } 00:08:00.919 Got JSON-RPC error response 00:08:00.919 response: 00:08:00.919 { 00:08:00.919 "code": -32602, 00:08:00.919 "message": "The specified target doesn't exist, cannot delete it." 
00:08:00.919 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:00.919 00:44:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:00.919 00:44:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:08:00.919 00:44:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:00.919 00:44:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:08:00.919 00:44:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:00.919 00:44:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:08:00.919 00:44:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:00.919 00:44:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:00.919 rmmod nvme_tcp 00:08:00.919 rmmod nvme_fabrics 00:08:00.919 rmmod nvme_keyring 00:08:00.919 00:44:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:00.919 00:44:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:08:00.919 00:44:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:08:00.919 00:44:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2558994 ']' 00:08:00.919 00:44:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2558994 00:08:00.919 00:44:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 2558994 ']' 00:08:00.919 00:44:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 2558994 00:08:00.919 00:44:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:08:00.919 00:44:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:00.919 00:44:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2558994 00:08:01.176 00:44:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:01.176 00:44:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:01.176 00:44:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2558994' 00:08:01.176 killing process with pid 2558994 00:08:01.176 00:44:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 2558994 00:08:01.176 00:44:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 2558994 00:08:01.433 00:44:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:01.433 00:44:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:01.433 00:44:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:01.433 00:44:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:01.433 00:44:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:01.433 00:44:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.433 00:44:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.433 00:44:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.329 00:44:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:03.329 00:08:03.329 real 0m9.359s 00:08:03.329 user 0m22.533s 00:08:03.329 sys 0m2.517s 00:08:03.329 00:44:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.329 00:44:37 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:03.329 ************************************ 00:08:03.329 END TEST nvmf_invalid 00:08:03.329 ************************************ 00:08:03.329 00:44:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:03.329 00:44:38 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:03.329 00:44:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:03.329 00:44:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.329 00:44:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:03.329 ************************************ 00:08:03.329 START TEST nvmf_abort 00:08:03.329 ************************************ 00:08:03.329 00:44:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:03.619 * Looking for test storage... 00:08:03.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:03.619 00:44:38 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:03.619 00:44:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.526 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.527 
00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:05.527 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:05.527 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:05.527 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:05.527 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:05.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:05.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:08:05.527 00:08:05.527 --- 10.0.0.2 ping statistics --- 00:08:05.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.527 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:05.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:05.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:08:05.527 00:08:05.527 --- 10.0.0.1 ping statistics --- 00:08:05.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.527 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2561750 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2561750 00:08:05.527 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2561750 ']' 00:08:05.528 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.528 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:05.528 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.528 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:05.528 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:05.787 [2024-07-16 00:44:40.307488] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:08:05.787 [2024-07-16 00:44:40.307581] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.787 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.787 [2024-07-16 00:44:40.383523] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:05.787 [2024-07-16 00:44:40.508552] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.787 [2024-07-16 00:44:40.508616] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.787 [2024-07-16 00:44:40.508632] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.787 [2024-07-16 00:44:40.508644] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:05.787 [2024-07-16 00:44:40.508657] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:05.787 [2024-07-16 00:44:40.508744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.787 [2024-07-16 00:44:40.508795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.787 [2024-07-16 00:44:40.508799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:06.046 [2024-07-16 00:44:40.653838] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:06.046 Malloc0 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:06.046 Delay0 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:06.046 [2024-07-16 00:44:40.725680] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.046 00:44:40 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:06.046 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.305 [2024-07-16 00:44:40.862020] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:08.208 Initializing NVMe Controllers 00:08:08.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:08.208 controller IO queue size 128 less than required 00:08:08.209 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:08.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:08.209 Initialization complete. Launching workers. 
00:08:08.209 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32408 00:08:08.209 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32469, failed to submit 62 00:08:08.209 success 32412, unsuccess 57, failed 0 00:08:08.209 00:44:42 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:08.209 00:44:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.209 00:44:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:08.209 00:44:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.209 00:44:42 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:08.209 00:44:42 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:08.209 00:44:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:08.209 00:44:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:08.209 00:44:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:08.209 00:44:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:08.209 00:44:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.209 00:44:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:08.209 rmmod nvme_tcp 00:08:08.467 rmmod nvme_fabrics 00:08:08.467 rmmod nvme_keyring 00:08:08.467 00:44:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:08.467 00:44:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:08.467 00:44:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:08.467 00:44:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2561750 ']' 00:08:08.467 00:44:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2561750 00:08:08.467 00:44:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2561750 ']' 00:08:08.467 00:44:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2561750 00:08:08.467 00:44:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:08:08.467 00:44:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:08.467 00:44:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2561750 00:08:08.467 00:44:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:08.467 00:44:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:08.467 00:44:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2561750' 00:08:08.467 killing process with pid 2561750 00:08:08.467 00:44:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2561750 00:08:08.467 00:44:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2561750 00:08:08.724 00:44:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:08.724 00:44:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:08.724 00:44:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:08.724 00:44:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.724 00:44:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:08.724 00:44:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.724 00:44:43 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.724 00:44:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.634 00:44:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:10.893 00:08:10.893 real 0m7.349s 00:08:10.893 user 0m10.746s 00:08:10.893 sys 0m2.507s 00:08:10.893 00:44:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.893 00:44:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:10.893 ************************************ 00:08:10.893 END TEST nvmf_abort 00:08:10.893 ************************************ 00:08:10.893 00:44:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:10.893 00:44:45 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:10.893 00:44:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:10.893 00:44:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.893 00:44:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:10.893 ************************************ 00:08:10.893 START TEST nvmf_ns_hotplug_stress 00:08:10.893 ************************************ 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:10.893 * Looking for test storage... 00:08:10.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.893 00:44:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.893 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:10.894 00:44:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:10.894 00:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:13.428 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:13.428 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.428 00:44:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:13.428 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:13.428 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:13.428 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:13.429 00:44:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:13.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:08:13.429 00:08:13.429 --- 10.0.0.2 ping statistics --- 00:08:13.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.429 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:13.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:13.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:08:13.429 00:08:13.429 --- 10.0.0.1 ping statistics --- 00:08:13.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.429 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2563975 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2563975 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2563975 ']' 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:13.429 00:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:13.429 [2024-07-16 00:44:47.839903] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:08:13.429 [2024-07-16 00:44:47.839991] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.429 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.429 [2024-07-16 00:44:47.908099] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:13.429 [2024-07-16 00:44:48.028532] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.429 [2024-07-16 00:44:48.028594] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.429 [2024-07-16 00:44:48.028610] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.429 [2024-07-16 00:44:48.028624] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.429 [2024-07-16 00:44:48.028635] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.429 [2024-07-16 00:44:48.028722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.429 [2024-07-16 00:44:48.028777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.429 [2024-07-16 00:44:48.028781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.364 00:44:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:14.364 00:44:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:08:14.364 00:44:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:14.364 00:44:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:14.364 00:44:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:14.364 00:44:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.364 00:44:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:14.364 00:44:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:14.364 [2024-07-16 00:44:49.067265] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.364 00:44:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:14.933 00:44:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.933 [2024-07-16 00:44:49.642212] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.933 00:44:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:15.191 00:44:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:08:15.449 Malloc0 00:08:15.449 00:44:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:15.707 Delay0 00:08:15.707 00:44:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.965 00:44:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:16.223 NULL1 00:08:16.223 00:44:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:16.481 00:44:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2564409 00:08:16.481 00:44:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:16.481 00:44:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:16.481 00:44:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.481 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.738 00:44:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.996 00:44:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:16.996 00:44:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:17.254 true 00:08:17.254 00:44:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:17.254 00:44:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.512 00:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.770 00:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:17.770 00:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:18.029 true 00:08:18.029 00:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:18.029 00:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.965 Read completed with error (sct=0, sc=11) 00:08:18.965 00:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.224 00:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:19.224 00:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:19.523 true 00:08:19.523 00:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:19.523 00:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.804 00:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.061 00:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:20.061 00:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:20.061 true 00:08:20.061 00:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:20.061 00:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.437 00:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:21.437 00:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:21.437 00:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:21.694 true 00:08:21.694 00:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:21.694 00:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.951 00:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.208 00:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:22.208 00:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:22.465 true 00:08:22.465 00:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:22.465 00:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.402 00:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.402 00:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:23.402 00:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:23.660 true 00:08:23.660 00:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:23.660 00:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.918 00:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.175 00:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:24.175 00:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:24.433 true 00:08:24.433 00:44:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:24.433 00:44:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.373 00:45:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.630 00:45:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:25.630 00:45:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:25.888 true 00:08:25.888 00:45:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:25.888 00:45:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.146 00:45:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.404 00:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:26.404 00:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:26.662 true 00:08:26.662 00:45:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:26.662 00:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.594 00:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.851 00:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:27.851 00:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:28.110 true 00:08:28.110 00:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:28.110 00:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.368 00:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.625 00:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:28.625 00:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:28.884 true 00:08:28.884 00:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:28.884 00:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.823 00:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.082 00:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:30.082 00:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:30.340 true 00:08:30.340 00:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:30.340 00:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.599 00:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.858 00:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:30.858 00:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:30.858 true 00:08:30.858 00:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:30.858 00:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.236 00:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.236 00:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:32.236 00:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:32.494 true 00:08:32.494 00:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:32.494 00:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.752 00:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.009 00:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:33.009 00:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:33.267 true 00:08:33.267 00:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:33.267 00:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.559 00:45:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.817 00:45:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:33.817 00:45:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:34.074 true 00:08:34.074 00:45:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:34.074 00:45:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.008 00:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.265 00:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:35.265 00:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:35.523 true 00:08:35.523 
00:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:35.523 00:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.780 00:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.037 00:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:36.037 00:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:36.294 true 00:08:36.294 00:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:36.294 00:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.230 00:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.230 00:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:37.231 00:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:37.488 true 00:08:37.488 00:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:37.488 00:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.745 00:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.002 00:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:38.002 00:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:38.259 true 00:08:38.259 00:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:38.259 00:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.190 00:45:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.447 00:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:39.447 00:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:39.704 true 00:08:39.704 00:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:39.704 00:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.961 00:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.218 00:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:40.218 00:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:40.474 true 00:08:40.474 00:45:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:40.475 00:45:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.408 00:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.665 00:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:41.665 00:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:41.923 true 00:08:41.923 00:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:41.923 00:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.180 00:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.438 00:45:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:42.438 00:45:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:42.696 true 00:08:42.696 00:45:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:42.696 00:45:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.633 00:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.633 00:45:18 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:43.633 00:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:43.891 true 00:08:43.891 00:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:43.891 00:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.148 00:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.407 00:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:44.407 00:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:44.664 true 00:08:44.664 00:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:44.664 00:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.599 00:45:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.857 00:45:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:45.857 00:45:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:46.116 true 00:08:46.116 00:45:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:46.116 00:45:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.374 00:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.631 00:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:46.631 00:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:46.894 true 00:08:46.894 00:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409 00:08:46.894 00:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.894 Initializing NVMe Controllers 00:08:46.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:46.894 Controller IO queue size 128, less than required. 00:08:46.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:46.894 Controller IO queue size 128, less than required.
00:08:46.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:46.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:46.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:46.894 Initialization complete. Launching workers.
00:08:46.894 ========================================================
00:08:46.894 Latency(us)
00:08:46.894 Device Information : IOPS MiB/s Average min max
00:08:46.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 727.16 0.36 85510.78 2456.36 1094982.92
00:08:46.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10291.58 5.03 12401.21 2831.82 454386.89
00:08:46.894 ========================================================
00:08:46.894 Total : 11018.74 5.38 17225.93 2456.36 1094982.92
00:08:46.894
00:08:47.220 00:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:47.477 00:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:08:47.477 00:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:08:47.734 true
00:08:47.734 00:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2564409
00:08:47.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2564409) - No such process
00:08:47.734 00:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2564409
00:08:47.734 00:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:47.991 00:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:48.248 00:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:48.248 00:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:48.248 00:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:48.248 00:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:48.248 00:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:48.505 null0
00:08:48.505 00:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:48.505 00:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:48.505 00:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:48.764 null1
00:08:48.764 00:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:48.764 00:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 --
# (( i < nthreads )) 00:08:48.764 00:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:48.764 null2 00:08:49.022 00:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:49.022 00:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:49.022 00:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:49.022 null3 00:08:49.022 00:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:49.022 00:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:49.022 00:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:49.588 null4 00:08:49.588 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:49.588 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:49.588 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:49.847 null5 00:08:49.847 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:49.847 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:49.847 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:49.847 null6 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:50.106 null7 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
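The @44-@50 entries that fill the first part of this log (the null_size=1011 through null_size=1030 sequence, which ends in the "No such process" message above once the background I/O generator exits) correspond to a hotplug/resize loop of roughly the following shape. This is a sketch reconstructed from the trace, not the script verbatim; rpc_py and PERF_PID are placeholder names standing in for the rpc.py path and the PID 2564409 that appear literally in the trace, and the starting null_size is assumed:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as invoked in the trace
    PERF_PID=2564409                                 # PID of the background I/O generator, as seen at @44
    null_size=1000                                   # assumed start; the trace only shows values past 1010
    while kill -0 "$PERF_PID"; do                    # @44: keep churning while the I/O generator is alive
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # @45: hot-remove namespace 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # @46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                 # @49: bump the target size by one
        "$rpc_py" bdev_null_resize NULL1 "$null_size"    # @50: resize the NULL1 bdev under load
    done

Using kill -0 as the loop condition keeps the hotplug churn running exactly as long as the I/O process lives, which is why the phase ends with the kill error and a wait on that PID.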
00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
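The interleaved @14-@18 entries from here on are eight copies of the add_remove helper running concurrently, one per namespace/bdev pair, each hot-adding and hot-removing its namespace ten times. A minimal sketch of that helper as implied by the trace (the function body is reconstructed, not quoted from the script; rpc_py is the same placeholder as above):

    add_remove() {                           # add_remove <nsid> <bdev>
        local nsid=$1 bdev=$2                # @14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do       # @16: ten add/remove cycles per worker
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
        done
    }

Because every worker targets the same subsystem, the RPCs from the eight loops interleave freely in the log, which is exactly the concurrent attach/detach path this stress test exercises.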
00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
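The @58-@66 entries around this region set up and reap that parallel phase: eight null bdevs (null0 through null7) are created, one background add_remove worker is started per bdev, and the wait on the eight PIDs listed a little further down (2569102 ... 2569115) collects them. A sketch of that sequence under the same placeholder assumptions:

    nthreads=8                                          # @58
    pids=()                                             # @58
    for ((i = 0; i < nthreads; i++)); do                # @59
        "$rpc_py" bdev_null_create "null$i" 100 4096    # @60: size/block-size arguments as shown in the trace
    done
    for ((i = 0; i < nthreads; i++)); do                # @62
        add_remove $((i + 1)) "null$i" &                # @63: namespace IDs 1..8 against null0..null7
        pids+=($!)                                      # @64: remember the worker PID
    done
    wait "${pids[@]}"                                   # @66: block until all eight workers finish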
00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2569102 2569103 2569104 2569107 2569109 2569111 2569113 2569115 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.106 00:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:50.364 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:50.621 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:50.621 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:50.621 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:50.621 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.621 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:50.621 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:50.621 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.878 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:51.135 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:51.135 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:51.135 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:51.135 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:51.135 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.135 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:51.135 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:51.135 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.392 00:45:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.392 00:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:51.649 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:51.649 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:51.649 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:51.649 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:51.649 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:51.649 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:51.649 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.649 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.906 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:52.162 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:52.162 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:52.162 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:52.162 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:52.162 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:52.162 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:52.162 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.162 00:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:52.418 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.418 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.418 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:52.418 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.418 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.419 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:52.419 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.419 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.419 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:52.419 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.419 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.419 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:52.419 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.419 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.419 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:52.419 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.419 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.419 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:52.419 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.419 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.419 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:52.419 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.419 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.419 
00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:52.676 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:52.676 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:52.676 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:52.676 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.676 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:52.676 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:52.676 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:52.676 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:52.933 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.933 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.933 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:52.933 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.933 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.933 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:52.933 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.933 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.933 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:52.933 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.933 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.933 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.934 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:52.934 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.934 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:52.934 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.934 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.934 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:52.934 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.934 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.934 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:52.934 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.934 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.934 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:53.191 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:53.191 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:53.191 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:53.191 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.191 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:53.191 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:53.191 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:53.191 00:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:53.448 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:53.448 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.448 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:53.448 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.448 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.448 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:53.448 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.448 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.448 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:53.448 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.449 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.449 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:53.449 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.449 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.449 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:53.449 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.449 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.449 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:53.449 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.449 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.449 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:53.449 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.449 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.449 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:53.706 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:53.706 
00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:53.706 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:53.706 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:53.706 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.706 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:53.706 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:53.706 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.964 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:54.222 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:54.222 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:54.222 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:54.222 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:54.222 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:54.223 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:54.223 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.223 00:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.481 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:54.739 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:54.739 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:54.739 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:54.739 
00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:54.739 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.739 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:54.739 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:54.739 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:54.998 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.998 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.998 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:54.998 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.998 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.998 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:54.998 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.998 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.998 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:54.998 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.998 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.998 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:54.998 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.999 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.999 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:54.999 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.999 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.999 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:54.999 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.999 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.999 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:54.999 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.999 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.999 00:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:55.257 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:55.257 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:55.515 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:55.515 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:55.515 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:55.515 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.515 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:55.515 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:55.773 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.773 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.773 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.773 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.773 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.773 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.773 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.773 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.773 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
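The interleaved add/remove RPCs above come from ns_hotplug_stress.sh driving several namespace workers against nqn.2016-06.io.spdk:cnode1 at once. A rough bash sketch of that pattern (an approximation inferred from the trace, not the verbatim upstream script) looks like this:

# Approximate shape of the hotplug stress phase seen in the trace: each worker
# attaches and detaches its own namespace ten times via rpc.py.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; ++i)); do
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
}

for n in $(seq 1 8); do
    add_remove "$n" "null$((n - 1))" &    # null0..null7 are the null bdevs created earlier in the test
done
wait

Running the eight workers in the background is what produces the interleaved ordering of namespace IDs in the trace above.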
00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:55.774 rmmod nvme_tcp 00:08:55.774 rmmod nvme_fabrics 00:08:55.774 rmmod nvme_keyring 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2563975 ']' 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2563975 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2563975 ']' 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2563975 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2563975 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2563975' 00:08:55.774 killing process with pid 2563975 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2563975 00:08:55.774 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2563975 00:08:56.032 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:56.032 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:56.032 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:56.032 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:56.032 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:56.032 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.032 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.032 00:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.566 00:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:58.566 00:08:58.566 real 0m47.309s 00:08:58.566 user 3m33.585s 00:08:58.566 sys 0m16.941s 00:08:58.566 00:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.566 00:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:58.566 ************************************ 00:08:58.566 END TEST nvmf_ns_hotplug_stress 00:08:58.566 ************************************ 00:08:58.566 00:45:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:58.566 00:45:32 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:58.566 00:45:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:58.566 00:45:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.566 00:45:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:58.566 ************************************ 00:08:58.566 START TEST nvmf_connect_stress 00:08:58.566 ************************************ 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:58.566 * Looking for test storage... 
00:08:58.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:58.566 00:45:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:59.939 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:59.939 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:59.939 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:59.939 00:45:34 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:59.939 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:59.939 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:59.940 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.940 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.940 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.940 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:59.940 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.940 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.940 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:59.940 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.940 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.940 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:59.940 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:59.940 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.940 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:00.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:00.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:09:00.212 00:09:00.212 --- 10.0.0.2 ping statistics --- 00:09:00.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.212 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:00.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:09:00.212 00:09:00.212 --- 10.0.0.1 ping statistics --- 00:09:00.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.212 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2571865 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2571865 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2571865 ']' 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:00.212 00:45:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:00.212 [2024-07-16 00:45:34.878959] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
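The nvmf_tcp_init sequence traced just above is what lets the kernel-side tooling and the SPDK target share a single host: one port of the e810 pair (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, while the other (cvl_0_1) stays in the root namespace as 10.0.0.1. Condensed from the commands visible in the log, with comments added:

ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target-side e810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in on the initiator port
ping -c 1 10.0.0.2                                             # reachability check in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt started next is prefixed with ip netns exec cvl_0_0_ns_spdk, so it listens on 10.0.0.2 inside that namespace while the initiator-side tools connect from 10.0.0.1 in the root namespace.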
00:09:00.212 [2024-07-16 00:45:34.879035] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.212 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.212 [2024-07-16 00:45:34.940851] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:00.470 [2024-07-16 00:45:35.051721] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.470 [2024-07-16 00:45:35.051777] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.470 [2024-07-16 00:45:35.051791] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.470 [2024-07-16 00:45:35.051803] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.470 [2024-07-16 00:45:35.051813] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.470 [2024-07-16 00:45:35.051949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.470 [2024-07-16 00:45:35.052001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.470 [2024-07-16 00:45:35.052005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.470 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:00.470 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:09:00.470 00:45:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:00.470 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:00.470 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:00.470 00:45:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.470 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:00.470 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.470 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:00.470 [2024-07-16 00:45:35.198323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.470 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.470 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:00.470 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.470 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:00.470 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.470 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.470 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.470 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:00.731 [2024-07-16 00:45:35.229092] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:00.731 NULL1 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2572004 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2572004 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.731 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:01.004 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.004 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2572004 00:09:01.004 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:01.004 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.004 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:01.272 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.272 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2572004 00:09:01.272 00:45:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:01.272 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.272 00:45:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:01.528 00:45:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.528 00:45:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2572004 
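With nvmf_tgt up and waitforlisten satisfied, the remainder of the trace is RPC configuration, the launch of the stress client, and a watch loop that keeps poking the target while the client runs. A condensed sketch of those steps (rpc_cmd is the test harness's RPC helper; paths shortened and the loop structure inferred from the kill -0 / rpc_cmd pattern, so treat this as an approximation rather than the verbatim script):

# Target-side configuration as issued over RPC in the trace above:
rpc_cmd nvmf_create_transport -t tcp -o -u 8192                # TCP transport with the test's options
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512                        # null backing bdev used by the stress RPCs

# The stress client repeatedly connects to that listener for the -t 10 window.
./test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -t 10 &
PERF_PID=$!                                                    # pid 2572004 in this run

# Watch loop: while the client is alive, replay the RPC batch assembled into
# rpc.txt by the seq 1 20 loop above (the batch contents are not visible in
# this excerpt).
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
while kill -0 "$PERF_PID" 2> /dev/null; do
    rpc_cmd < "$rpcs"
done
wait "$PERF_PID"                                               # propagate the client's exit status

The repeated kill -0 2572004 / rpc_cmd pairs that fill the rest of this excerpt are iterations of that watch loop.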
00:09:01.528 – 00:09:10.606 (wall clock 00:45:36 – 00:45:45): the same four-step sequence — the [[ 0 == 0 ]] status check, kill -0 2572004 (connect_stress.sh@34), rpc_cmd (connect_stress.sh@35) and the xtrace_disable / set +x bookkeeping — repeats about thirty more times with only the timestamps changing, for as long as the stress process (PID 2572004) stays alive.
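What the loop above amounts to is connect_stress.sh supervising a background stress process while feeding RPCs to the target. A minimal sketch of that pattern follows; it is reconstructed from the commands visible in the trace (the seq 1 20 / cat loop, kill -0, rpc_cmd, and the later wait / rm -f rpc.txt), not copied from the script itself, and the rpc.txt payload and the sleep stand-in are assumptions:

# Sketch only: rpc_cmd is the SPDK test-harness RPC wrapper seen in the trace;
# everything else here is a stand-in for what connect_stress.sh actually does.
rpc_batch=/tmp/rpc.txt
: > "$rpc_batch"
for i in $(seq 1 20); do                    # mirrors the "for i in $(seq 1 20) / cat" loop above
    echo "rpc_get_methods" >> "$rpc_batch"  # placeholder RPC; the real payload is not visible in this excerpt
done

sleep 30 &                                  # stand-in for the stress workload (PID 2572004 in this run)
stress_pid=$!

# kill -0 only probes whether the PID still exists; while it does, keep the
# target busy by replaying the RPC batch, then reap the process and clean up.
while kill -0 "$stress_pid" 2>/dev/null; do
    rpc_cmd < "$rpc_batch"
done
wait "$stress_pid"
rm -f "$rpc_batch"

When kill -0 finally fails (the "No such process" message below), the loop exits and the script moves on to teardown.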
00:09:10.606 00:45:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:10.606 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:10.863 00:45:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.863 00:45:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2572004 00:09:10.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2572004) - No such process 00:09:10.863 00:45:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2572004 00:09:10.863 00:45:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:10.863 00:45:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:10.863 00:45:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:10.863 00:45:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:10.863 00:45:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:09:10.863 00:45:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:10.863 00:45:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:09:10.863 00:45:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:10.863 00:45:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:10.863 rmmod nvme_tcp 00:09:10.863 rmmod nvme_fabrics 00:09:10.863 rmmod nvme_keyring 00:09:11.120 00:45:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:11.120 00:45:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:09:11.120 00:45:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:09:11.120 00:45:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2571865 ']' 00:09:11.120 00:45:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2571865 00:09:11.120 00:45:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2571865 ']' 00:09:11.120 00:45:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2571865 00:09:11.120 00:45:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:09:11.120 00:45:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:11.120 00:45:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2571865 00:09:11.120 00:45:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:11.120 00:45:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:11.120 00:45:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2571865' 00:09:11.120 killing process with pid 2571865 00:09:11.120 00:45:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2571865 00:09:11.120 00:45:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2571865 00:09:11.377 00:45:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:11.377 00:45:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:11.377 00:45:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:09:11.377 00:45:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:11.377 00:45:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:11.377 00:45:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.377 00:45:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:11.377 00:45:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.281 00:45:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:13.281 00:09:13.281 real 0m15.196s 00:09:13.281 user 0m38.347s 00:09:13.281 sys 0m5.722s 00:09:13.281 00:45:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.281 00:45:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:13.281 ************************************ 00:09:13.281 END TEST nvmf_connect_stress 00:09:13.281 ************************************ 00:09:13.281 00:45:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:13.281 00:45:48 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:13.281 00:45:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:13.281 00:45:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.281 00:45:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:13.540 ************************************ 00:09:13.540 START TEST nvmf_fused_ordering 00:09:13.540 ************************************ 00:09:13.540 00:45:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:13.540 * Looking for test storage... 
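Recapping the nvmf_connect_stress teardown that completed just before the fused_ordering banner above: nvmftestfini unloads the kernel initiator modules, stops the target and removes the test network namespace. A condensed sketch using the names from this run (the ip netns delete step is an assumption about what _remove_spdk_ns amounts to):

sync                                   # settle outstanding I/O before unloading modules
modprobe -v -r nvme-tcp                # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away
modprobe -v -r nvme-fabrics
kill 2571865                           # killprocess: stop the nvmf_tgt reactor ("killing process with pid 2571865")
ip netns delete cvl_0_0_ns_spdk        # assumption: roughly what _remove_spdk_ns does
ip -4 addr flush cvl_0_1               # drop the initiator-side address, as in the trace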
00:09:13.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:13.540 00:45:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:13.540 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:13.540 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.540 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.540 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.540 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.540 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.540 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.540 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.540 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.540 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.540 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.540 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:13.540 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:13.540 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.540 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.540 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:13.540 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:09:13.541 00:45:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:15.484 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:15.484 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:15.484 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:15.485 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:15.485 00:45:49 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:15.485 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:15.485 00:45:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:15.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:15.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:09:15.485 00:09:15.485 --- 10.0.0.2 ping statistics --- 00:09:15.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.485 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:15.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:15.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:09:15.485 00:09:15.485 --- 10.0.0.1 ping statistics --- 00:09:15.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.485 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2575159 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2575159 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2575159 ']' 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.485 00:45:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:15.485 [2024-07-16 00:45:50.208835] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
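Stripped of the xtrace noise, the network bring-up that nvmftestinit just performed (and that the two pings verify) comes down to the commands below. Interface names are the cvl_0_0 / cvl_0_1 E810 ports detected on this machine, and the nvmf_tgt path is shortened to the repo-relative one:

ip netns add cvl_0_0_ns_spdk                                   # target gets its own network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the first E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 on the initiator-side interface
ping -c 1 10.0.0.2                                             # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator sanity check
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # start the target inside the namespace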
00:09:15.485 [2024-07-16 00:45:50.208917] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.485 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.743 [2024-07-16 00:45:50.283406] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.743 [2024-07-16 00:45:50.390993] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.743 [2024-07-16 00:45:50.391056] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.743 [2024-07-16 00:45:50.391072] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.743 [2024-07-16 00:45:50.391086] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.743 [2024-07-16 00:45:50.391098] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.743 [2024-07-16 00:45:50.391128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:16.676 [2024-07-16 00:45:51.181721] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:16.676 [2024-07-16 00:45:51.197892] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.676 00:45:51 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:16.676 NULL1 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.676 00:45:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:16.676 [2024-07-16 00:45:51.243656] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:09:16.676 [2024-07-16 00:45:51.243699] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2575308 ] 00:09:16.676 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.239 Attached to nqn.2016-06.io.spdk:cnode1 00:09:17.239 Namespace ID: 1 size: 1GB 00:09:17.239 fused_ordering(0) 00:09:17.239 fused_ordering(1) 00:09:17.239 fused_ordering(2) 00:09:17.239 fused_ordering(3) 00:09:17.239 fused_ordering(4) 00:09:17.239 fused_ordering(5) 00:09:17.239 fused_ordering(6) 00:09:17.239 fused_ordering(7) 00:09:17.239 fused_ordering(8) 00:09:17.239 fused_ordering(9) 00:09:17.239 fused_ordering(10) 00:09:17.239 fused_ordering(11) 00:09:17.239 fused_ordering(12) 00:09:17.239 fused_ordering(13) 00:09:17.239 fused_ordering(14) 00:09:17.239 fused_ordering(15) 00:09:17.239 fused_ordering(16) 00:09:17.239 fused_ordering(17) 00:09:17.239 fused_ordering(18) 00:09:17.239 fused_ordering(19) 00:09:17.239 fused_ordering(20) 00:09:17.239 fused_ordering(21) 00:09:17.239 fused_ordering(22) 00:09:17.239 fused_ordering(23) 00:09:17.239 fused_ordering(24) 00:09:17.239 fused_ordering(25) 00:09:17.239 fused_ordering(26) 00:09:17.239 fused_ordering(27) 00:09:17.239 fused_ordering(28) 00:09:17.239 fused_ordering(29) 00:09:17.239 fused_ordering(30) 00:09:17.239 fused_ordering(31) 00:09:17.239 fused_ordering(32) 00:09:17.239 fused_ordering(33) 00:09:17.239 fused_ordering(34) 00:09:17.239 fused_ordering(35) 00:09:17.239 fused_ordering(36) 00:09:17.239 fused_ordering(37) 00:09:17.239 fused_ordering(38) 00:09:17.239 fused_ordering(39) 00:09:17.239 fused_ordering(40) 00:09:17.239 fused_ordering(41) 00:09:17.239 fused_ordering(42) 00:09:17.239 fused_ordering(43) 00:09:17.239 
00:09:17.239 – 00:09:19.298: the counter continues uninterrupted from fused_ordering(44) through fused_ordering(690); fresh output batches are flushed at 00:09:17.803 (around entry 205), 00:09:18.734 (around entry 410) and 00:09:19.298 (around entry 615). Only the counter value changes from entry to entry.
00:09:19.298 fused_ordering(691) 00:09:19.298 fused_ordering(692) 00:09:19.298 fused_ordering(693) 00:09:19.298 fused_ordering(694) 00:09:19.298 fused_ordering(695) 00:09:19.298 fused_ordering(696) 00:09:19.298 fused_ordering(697) 00:09:19.298 fused_ordering(698) 00:09:19.298 fused_ordering(699) 00:09:19.298 fused_ordering(700) 00:09:19.298 fused_ordering(701) 00:09:19.298 fused_ordering(702) 00:09:19.298 fused_ordering(703) 00:09:19.298 fused_ordering(704) 00:09:19.298 fused_ordering(705) 00:09:19.298 fused_ordering(706) 00:09:19.298 fused_ordering(707) 00:09:19.298 fused_ordering(708) 00:09:19.298 fused_ordering(709) 00:09:19.298 fused_ordering(710) 00:09:19.298 fused_ordering(711) 00:09:19.298 fused_ordering(712) 00:09:19.298 fused_ordering(713) 00:09:19.298 fused_ordering(714) 00:09:19.298 fused_ordering(715) 00:09:19.298 fused_ordering(716) 00:09:19.298 fused_ordering(717) 00:09:19.298 fused_ordering(718) 00:09:19.298 fused_ordering(719) 00:09:19.298 fused_ordering(720) 00:09:19.298 fused_ordering(721) 00:09:19.298 fused_ordering(722) 00:09:19.298 fused_ordering(723) 00:09:19.298 fused_ordering(724) 00:09:19.298 fused_ordering(725) 00:09:19.298 fused_ordering(726) 00:09:19.298 fused_ordering(727) 00:09:19.298 fused_ordering(728) 00:09:19.298 fused_ordering(729) 00:09:19.298 fused_ordering(730) 00:09:19.298 fused_ordering(731) 00:09:19.298 fused_ordering(732) 00:09:19.298 fused_ordering(733) 00:09:19.298 fused_ordering(734) 00:09:19.298 fused_ordering(735) 00:09:19.298 fused_ordering(736) 00:09:19.298 fused_ordering(737) 00:09:19.298 fused_ordering(738) 00:09:19.298 fused_ordering(739) 00:09:19.298 fused_ordering(740) 00:09:19.298 fused_ordering(741) 00:09:19.298 fused_ordering(742) 00:09:19.298 fused_ordering(743) 00:09:19.298 fused_ordering(744) 00:09:19.298 fused_ordering(745) 00:09:19.298 fused_ordering(746) 00:09:19.298 fused_ordering(747) 00:09:19.298 fused_ordering(748) 00:09:19.299 fused_ordering(749) 00:09:19.299 fused_ordering(750) 00:09:19.299 fused_ordering(751) 00:09:19.299 fused_ordering(752) 00:09:19.299 fused_ordering(753) 00:09:19.299 fused_ordering(754) 00:09:19.299 fused_ordering(755) 00:09:19.299 fused_ordering(756) 00:09:19.299 fused_ordering(757) 00:09:19.299 fused_ordering(758) 00:09:19.299 fused_ordering(759) 00:09:19.299 fused_ordering(760) 00:09:19.299 fused_ordering(761) 00:09:19.299 fused_ordering(762) 00:09:19.299 fused_ordering(763) 00:09:19.299 fused_ordering(764) 00:09:19.299 fused_ordering(765) 00:09:19.299 fused_ordering(766) 00:09:19.299 fused_ordering(767) 00:09:19.299 fused_ordering(768) 00:09:19.299 fused_ordering(769) 00:09:19.299 fused_ordering(770) 00:09:19.299 fused_ordering(771) 00:09:19.299 fused_ordering(772) 00:09:19.299 fused_ordering(773) 00:09:19.299 fused_ordering(774) 00:09:19.299 fused_ordering(775) 00:09:19.299 fused_ordering(776) 00:09:19.299 fused_ordering(777) 00:09:19.299 fused_ordering(778) 00:09:19.299 fused_ordering(779) 00:09:19.299 fused_ordering(780) 00:09:19.299 fused_ordering(781) 00:09:19.299 fused_ordering(782) 00:09:19.299 fused_ordering(783) 00:09:19.299 fused_ordering(784) 00:09:19.299 fused_ordering(785) 00:09:19.299 fused_ordering(786) 00:09:19.299 fused_ordering(787) 00:09:19.299 fused_ordering(788) 00:09:19.299 fused_ordering(789) 00:09:19.299 fused_ordering(790) 00:09:19.299 fused_ordering(791) 00:09:19.299 fused_ordering(792) 00:09:19.299 fused_ordering(793) 00:09:19.299 fused_ordering(794) 00:09:19.299 fused_ordering(795) 00:09:19.299 fused_ordering(796) 00:09:19.299 fused_ordering(797) 00:09:19.299 
fused_ordering(798) 00:09:19.299 fused_ordering(799) 00:09:19.299 fused_ordering(800) 00:09:19.299 fused_ordering(801) 00:09:19.299 fused_ordering(802) 00:09:19.299 fused_ordering(803) 00:09:19.299 fused_ordering(804) 00:09:19.299 fused_ordering(805) 00:09:19.299 fused_ordering(806) 00:09:19.299 fused_ordering(807) 00:09:19.299 fused_ordering(808) 00:09:19.299 fused_ordering(809) 00:09:19.299 fused_ordering(810) 00:09:19.299 fused_ordering(811) 00:09:19.299 fused_ordering(812) 00:09:19.299 fused_ordering(813) 00:09:19.299 fused_ordering(814) 00:09:19.299 fused_ordering(815) 00:09:19.299 fused_ordering(816) 00:09:19.299 fused_ordering(817) 00:09:19.299 fused_ordering(818) 00:09:19.299 fused_ordering(819) 00:09:19.299 fused_ordering(820) 00:09:20.230 fused_ordering(821) 00:09:20.230 fused_ordering(822) 00:09:20.230 fused_ordering(823) 00:09:20.230 fused_ordering(824) 00:09:20.230 fused_ordering(825) 00:09:20.230 fused_ordering(826) 00:09:20.230 fused_ordering(827) 00:09:20.230 fused_ordering(828) 00:09:20.230 fused_ordering(829) 00:09:20.230 fused_ordering(830) 00:09:20.230 fused_ordering(831) 00:09:20.230 fused_ordering(832) 00:09:20.230 fused_ordering(833) 00:09:20.230 fused_ordering(834) 00:09:20.230 fused_ordering(835) 00:09:20.230 fused_ordering(836) 00:09:20.230 fused_ordering(837) 00:09:20.230 fused_ordering(838) 00:09:20.230 fused_ordering(839) 00:09:20.230 fused_ordering(840) 00:09:20.230 fused_ordering(841) 00:09:20.230 fused_ordering(842) 00:09:20.230 fused_ordering(843) 00:09:20.230 fused_ordering(844) 00:09:20.230 fused_ordering(845) 00:09:20.230 fused_ordering(846) 00:09:20.230 fused_ordering(847) 00:09:20.230 fused_ordering(848) 00:09:20.230 fused_ordering(849) 00:09:20.230 fused_ordering(850) 00:09:20.230 fused_ordering(851) 00:09:20.230 fused_ordering(852) 00:09:20.230 fused_ordering(853) 00:09:20.230 fused_ordering(854) 00:09:20.230 fused_ordering(855) 00:09:20.230 fused_ordering(856) 00:09:20.230 fused_ordering(857) 00:09:20.230 fused_ordering(858) 00:09:20.230 fused_ordering(859) 00:09:20.231 fused_ordering(860) 00:09:20.231 fused_ordering(861) 00:09:20.231 fused_ordering(862) 00:09:20.231 fused_ordering(863) 00:09:20.231 fused_ordering(864) 00:09:20.231 fused_ordering(865) 00:09:20.231 fused_ordering(866) 00:09:20.231 fused_ordering(867) 00:09:20.231 fused_ordering(868) 00:09:20.231 fused_ordering(869) 00:09:20.231 fused_ordering(870) 00:09:20.231 fused_ordering(871) 00:09:20.231 fused_ordering(872) 00:09:20.231 fused_ordering(873) 00:09:20.231 fused_ordering(874) 00:09:20.231 fused_ordering(875) 00:09:20.231 fused_ordering(876) 00:09:20.231 fused_ordering(877) 00:09:20.231 fused_ordering(878) 00:09:20.231 fused_ordering(879) 00:09:20.231 fused_ordering(880) 00:09:20.231 fused_ordering(881) 00:09:20.231 fused_ordering(882) 00:09:20.231 fused_ordering(883) 00:09:20.231 fused_ordering(884) 00:09:20.231 fused_ordering(885) 00:09:20.231 fused_ordering(886) 00:09:20.231 fused_ordering(887) 00:09:20.231 fused_ordering(888) 00:09:20.231 fused_ordering(889) 00:09:20.231 fused_ordering(890) 00:09:20.231 fused_ordering(891) 00:09:20.231 fused_ordering(892) 00:09:20.231 fused_ordering(893) 00:09:20.231 fused_ordering(894) 00:09:20.231 fused_ordering(895) 00:09:20.231 fused_ordering(896) 00:09:20.231 fused_ordering(897) 00:09:20.231 fused_ordering(898) 00:09:20.231 fused_ordering(899) 00:09:20.231 fused_ordering(900) 00:09:20.231 fused_ordering(901) 00:09:20.231 fused_ordering(902) 00:09:20.231 fused_ordering(903) 00:09:20.231 fused_ordering(904) 00:09:20.231 fused_ordering(905) 
00:09:20.231 fused_ordering(906) 00:09:20.231 fused_ordering(907) 00:09:20.231 fused_ordering(908) 00:09:20.231 fused_ordering(909) 00:09:20.231 fused_ordering(910) 00:09:20.231 fused_ordering(911) 00:09:20.231 fused_ordering(912) 00:09:20.231 fused_ordering(913) 00:09:20.231 fused_ordering(914) 00:09:20.231 fused_ordering(915) 00:09:20.231 fused_ordering(916) 00:09:20.231 fused_ordering(917) 00:09:20.231 fused_ordering(918) 00:09:20.231 fused_ordering(919) 00:09:20.231 fused_ordering(920) 00:09:20.231 fused_ordering(921) 00:09:20.231 fused_ordering(922) 00:09:20.231 fused_ordering(923) 00:09:20.231 fused_ordering(924) 00:09:20.231 fused_ordering(925) 00:09:20.231 fused_ordering(926) 00:09:20.231 fused_ordering(927) 00:09:20.231 fused_ordering(928) 00:09:20.231 fused_ordering(929) 00:09:20.231 fused_ordering(930) 00:09:20.231 fused_ordering(931) 00:09:20.231 fused_ordering(932) 00:09:20.231 fused_ordering(933) 00:09:20.231 fused_ordering(934) 00:09:20.231 fused_ordering(935) 00:09:20.231 fused_ordering(936) 00:09:20.231 fused_ordering(937) 00:09:20.231 fused_ordering(938) 00:09:20.231 fused_ordering(939) 00:09:20.231 fused_ordering(940) 00:09:20.231 fused_ordering(941) 00:09:20.231 fused_ordering(942) 00:09:20.231 fused_ordering(943) 00:09:20.231 fused_ordering(944) 00:09:20.231 fused_ordering(945) 00:09:20.231 fused_ordering(946) 00:09:20.231 fused_ordering(947) 00:09:20.231 fused_ordering(948) 00:09:20.231 fused_ordering(949) 00:09:20.231 fused_ordering(950) 00:09:20.231 fused_ordering(951) 00:09:20.231 fused_ordering(952) 00:09:20.231 fused_ordering(953) 00:09:20.231 fused_ordering(954) 00:09:20.231 fused_ordering(955) 00:09:20.231 fused_ordering(956) 00:09:20.231 fused_ordering(957) 00:09:20.231 fused_ordering(958) 00:09:20.231 fused_ordering(959) 00:09:20.231 fused_ordering(960) 00:09:20.231 fused_ordering(961) 00:09:20.231 fused_ordering(962) 00:09:20.231 fused_ordering(963) 00:09:20.231 fused_ordering(964) 00:09:20.231 fused_ordering(965) 00:09:20.231 fused_ordering(966) 00:09:20.231 fused_ordering(967) 00:09:20.231 fused_ordering(968) 00:09:20.231 fused_ordering(969) 00:09:20.231 fused_ordering(970) 00:09:20.231 fused_ordering(971) 00:09:20.231 fused_ordering(972) 00:09:20.231 fused_ordering(973) 00:09:20.231 fused_ordering(974) 00:09:20.231 fused_ordering(975) 00:09:20.231 fused_ordering(976) 00:09:20.231 fused_ordering(977) 00:09:20.231 fused_ordering(978) 00:09:20.231 fused_ordering(979) 00:09:20.231 fused_ordering(980) 00:09:20.231 fused_ordering(981) 00:09:20.231 fused_ordering(982) 00:09:20.231 fused_ordering(983) 00:09:20.231 fused_ordering(984) 00:09:20.231 fused_ordering(985) 00:09:20.231 fused_ordering(986) 00:09:20.231 fused_ordering(987) 00:09:20.231 fused_ordering(988) 00:09:20.231 fused_ordering(989) 00:09:20.231 fused_ordering(990) 00:09:20.231 fused_ordering(991) 00:09:20.231 fused_ordering(992) 00:09:20.231 fused_ordering(993) 00:09:20.231 fused_ordering(994) 00:09:20.231 fused_ordering(995) 00:09:20.231 fused_ordering(996) 00:09:20.231 fused_ordering(997) 00:09:20.231 fused_ordering(998) 00:09:20.231 fused_ordering(999) 00:09:20.231 fused_ordering(1000) 00:09:20.231 fused_ordering(1001) 00:09:20.231 fused_ordering(1002) 00:09:20.231 fused_ordering(1003) 00:09:20.231 fused_ordering(1004) 00:09:20.231 fused_ordering(1005) 00:09:20.231 fused_ordering(1006) 00:09:20.231 fused_ordering(1007) 00:09:20.231 fused_ordering(1008) 00:09:20.231 fused_ordering(1009) 00:09:20.231 fused_ordering(1010) 00:09:20.231 fused_ordering(1011) 00:09:20.231 fused_ordering(1012) 
00:09:20.231 fused_ordering(1013) 00:09:20.231 fused_ordering(1014) 00:09:20.231 fused_ordering(1015) 00:09:20.231 fused_ordering(1016) 00:09:20.231 fused_ordering(1017) 00:09:20.231 fused_ordering(1018) 00:09:20.231 fused_ordering(1019) 00:09:20.231 fused_ordering(1020) 00:09:20.231 fused_ordering(1021) 00:09:20.231 fused_ordering(1022) 00:09:20.231 fused_ordering(1023) 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:20.231 rmmod nvme_tcp 00:09:20.231 rmmod nvme_fabrics 00:09:20.231 rmmod nvme_keyring 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2575159 ']' 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2575159 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2575159 ']' 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2575159 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2575159 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2575159' 00:09:20.231 killing process with pid 2575159 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2575159 00:09:20.231 00:45:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2575159 00:09:20.800 00:45:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:20.800 00:45:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:20.800 00:45:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:20.800 00:45:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:20.800 00:45:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:20.800 00:45:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.800 00:45:55 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.800 00:45:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.701 00:45:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:22.701 00:09:22.701 real 0m9.250s 00:09:22.701 user 0m6.944s 00:09:22.701 sys 0m4.413s 00:09:22.701 00:45:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:22.701 00:45:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:22.701 ************************************ 00:09:22.701 END TEST nvmf_fused_ordering 00:09:22.701 ************************************ 00:09:22.701 00:45:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:22.701 00:45:57 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:22.701 00:45:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:22.701 00:45:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.701 00:45:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:22.701 ************************************ 00:09:22.701 START TEST nvmf_delete_subsystem 00:09:22.701 ************************************ 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:22.701 * Looking for test storage... 00:09:22.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.701 00:45:57 
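For reference, the nvmftestfini sequence that closed out the fused_ordering test above reduces to a handful of host commands. This is a minimal sketch, assuming the cvl_0_* interface names, the cvl_0_0_ns_spdk namespace and the nvmf_tgt PID (2575159) reported earlier in this run; the plain "ip netns delete" stands in for the harness's _remove_spdk_ns helper and is an assumption, not a copy of it.

#!/usr/bin/env bash
# Sketch of the fused_ordering teardown: unload the kernel initiator modules,
# stop the target, then drop the per-test namespace and addresses.
nvmfpid=2575159                        # nvmf_tgt PID reported by the harness above
sync
modprobe -v -r nvme-tcp                # the rmmod output above shows this also drops nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                        # the harness then waits for the reactor process to exit
ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed stand-in for _remove_spdk_ns
ip -4 addr flush cvl_0_1               # clear the initiator-side address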
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.701 00:45:57 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:22.701 00:45:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.231 00:45:59 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:25.231 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:25.231 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:25.231 00:45:59 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:25.231 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:25.231 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:25.231 00:45:59 
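The PCI scan traced above boils down to globbing each adapter's net/ directory in sysfs. The sketch below is illustrative rather than a copy of nvmf/common.sh; the 0000:0a:00.0 and 0000:0a:00.1 addresses and the 0x8086:0x159b (ice/E810) device ID are the ones reported in this run.

#!/usr/bin/env bash
# Illustrative sketch: expand each detected E810 PCI function into its net
# device name, as the "Found net devices under ..." lines above report.
e810_pci=(0000:0a:00.0 0000:0a:00.1)   # both enumerate as 0x8086:0x159b (ice)
net_devs=()
for pci in "${e810_pci[@]}"; do
    pci_net_devs=(/sys/bus/pci/devices/"$pci"/net/*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
# net_devs is now (cvl_0_0 cvl_0_1); the harness additionally checks that each
# link is up before using it.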
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:25.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:25.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:09:25.231 00:09:25.231 --- 10.0.0.2 ping statistics --- 00:09:25.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.231 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:25.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:25.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:09:25.231 00:09:25.231 --- 10.0.0.1 ping statistics --- 00:09:25.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.231 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:25.231 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.232 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:25.232 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:25.232 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.232 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:25.232 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:25.232 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:25.232 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:25.232 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:25.232 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.232 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2577648 00:09:25.232 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:25.232 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2577648 00:09:25.232 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2577648 ']' 00:09:25.232 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.232 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.232 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.232 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.232 00:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.232 [2024-07-16 00:45:59.685417] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:09:25.232 [2024-07-16 00:45:59.685509] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.232 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.232 [2024-07-16 00:45:59.749628] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:25.232 [2024-07-16 00:45:59.862715] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
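Stripped of the xtrace prefixes, the nvmf_tcp_init plumbing traced above amounts to moving one port of the E810 pair into a private namespace and wiring up 10.0.0.0/24 between the two sides. The commands below are lifted directly from the trace; the interface and namespace names are specific to this host.

#!/usr/bin/env bash
# Target/initiator split used by these TCP tests: cvl_0_0 (target side) lives
# in cvl_0_0_ns_spdk at 10.0.0.2, cvl_0_1 stays in the root namespace at 10.0.0.1.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns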
00:09:25.232 [2024-07-16 00:45:59.862775] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.232 [2024-07-16 00:45:59.862801] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.232 [2024-07-16 00:45:59.862815] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.232 [2024-07-16 00:45:59.862826] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.232 [2024-07-16 00:45:59.862918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.232 [2024-07-16 00:45:59.862939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:26.163 [2024-07-16 00:46:00.694287] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:26.163 [2024-07-16 00:46:00.710522] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:26.163 NULL1 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:26.163 Delay0 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2577804 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:26.163 00:46:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:26.163 EAL: No free 2048 kB hugepages reported on node 1 00:09:26.163 [2024-07-16 00:46:00.785337] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
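Condensed from the xtrace output above, the delete_subsystem test configures the target with the following RPCs and then deletes the subsystem while spdk_nvme_perf is still driving I/O, which is what produces the completion errors below. The direct rpc.py invocation is an assumption standing in for the harness's rpc_cmd wrapper (same arguments, default /var/tmp/spdk.sock socket); paths are relative to the SPDK checkout.

#!/usr/bin/env bash
# Sketch of target/delete_subsystem.sh as traced above, with rpc.py standing in
# for rpc_cmd.
rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512              # 1000 MB null bdev, 512-byte blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # 1,000,000 us artificial latencies
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# 5-second 70/30 randrw load from the initiator side, queue depth 128:
build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # deleted mid-I/O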
00:09:28.057 00:46:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:28.057 00:46:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:28.057 00:46:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[... repeated spdk_nvme_perf completion records omitted here: long runs of "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6", interleaved with the nvme_tcp errors listed below ...]
00:09:28.315 [2024-07-16 00:46:02.876633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3fa0000c00 is same with the state(5) to be set
00:09:29.246 [2024-07-16 00:46:03.845366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a7a70 is same with the state(5) to be set
00:09:29.246 [2024-07-16 00:46:03.877106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3fa000d020 is same with the state(5) to be set
00:09:29.246 [2024-07-16 00:46:03.877273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3fa000d6c0 is same with the state(5) to be set
00:09:29.247 [2024-07-16 00:46:03.877733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a67a0 is same with the state(5) to be set
00:09:29.247 [2024-07-16 00:46:03.878012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a6e40 is same with the state(5) to be set
00:09:29.247 Initializing NVMe Controllers
00:09:29.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:29.247 Controller IO queue size 128, less than required.
00:09:29.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:29.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:29.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:29.247 Initialization complete. Launching workers.
00:09:29.247 ========================================================
00:09:29.247 Latency(us)
00:09:29.247 Device Information : IOPS MiB/s Average min max
00:09:29.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 184.63 0.09 905403.86 770.20 1012397.56
00:09:29.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.30 0.08 913398.95 649.93 1011537.34
00:09:29.247 ========================================================
00:09:29.247 Total : 345.93 0.17 909131.84 649.93 1012397.56
00:09:29.247
00:09:29.247 [2024-07-16 00:46:03.878818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a7a70 (9): Bad file descriptor
00:09:29.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:09:29.247 00:46:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:29.247 00:46:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:09:29.247 00:46:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2577804
00:09:29.247 00:46:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:29.813 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:09:29.813 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2577804
00:09:29.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2577804) - No such process
00:09:29.813 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2577804
00:09:29.813 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:09:29.813 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2577804
00:09:29.813 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:09:29.813 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:29.813 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:09:29.813 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:29.813 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2577804
00:09:29.813 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:09:29.813 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:09:29.813 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:09:29.813 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:09:29.813 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:09:29.813 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:29.813 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:29.814 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:29.814 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:29.814 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.814 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.814 [2024-07-16 00:46:04.401397] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.814 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.814 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.814 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.814 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.814 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.814 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2578206 00:09:29.814 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:29.814 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:29.814 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2578206 00:09:29.814 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:29.814 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.814 [2024-07-16 00:46:04.467492] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
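Note on the sequence traced above: the delete_subsystem test runs spdk_nvme_perf against the target, deletes the subsystem while that I/O is still in flight (which produces the failed completions summarized earlier), and then polls until the perf process has exited before recreating the subsystem and repeating the run. A minimal sketch of that flow, paraphrased from the xtrace output rather than copied from delete_subsystem.sh, with rpc_cmd assumed to be the test harness wrapper around scripts/rpc.py:
  # Sketch only: binary path and flags are the ones visible in the log; the loop
  # structure and failure handling are illustrative, not the script's exact code.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # delete while I/O is still queued
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do                  # perf should notice the deletion and exit
      (( delay++ > 30 )) && exit 1                           # same bound the trace prints
      sleep 0.5
  done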
00:09:30.379 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:30.380 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2578206 00:09:30.380 00:46:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:30.968 00:46:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:30.968 00:46:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2578206 00:09:30.968 00:46:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:31.230 00:46:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:31.230 00:46:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2578206 00:09:31.230 00:46:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:31.794 00:46:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:31.794 00:46:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2578206 00:09:31.794 00:46:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:32.359 00:46:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:32.359 00:46:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2578206 00:09:32.359 00:46:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:32.924 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:32.924 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2578206 00:09:32.924 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:32.924 Initializing NVMe Controllers 00:09:32.924 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:32.924 Controller IO queue size 128, less than required. 00:09:32.924 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:32.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:32.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:32.924 Initialization complete. Launching workers. 
00:09:32.924 ========================================================
00:09:32.924 Latency(us)
00:09:32.924 Device Information : IOPS MiB/s Average min max
00:09:32.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003660.32 1000223.42 1042398.36
00:09:32.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005157.92 1000274.17 1011599.06
00:09:32.924 ========================================================
00:09:32.924 Total : 256.00 0.12 1004409.12 1000223.42 1042398.36
00:09:32.924
00:09:33.182 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:33.182 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2578206
00:09:33.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2578206) - No such process
00:09:33.182 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2578206
00:09:33.182 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:09:33.182 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:09:33.182 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:33.182 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:09:33.182 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:33.182 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:09:33.182 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:33.182 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:33.440 rmmod nvme_tcp
00:09:33.440 rmmod nvme_fabrics
00:09:33.440 rmmod nvme_keyring
00:09:33.440 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:33.440 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:09:33.440 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:09:33.440 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2577648 ']'
00:09:33.440 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2577648
00:09:33.440 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2577648 ']'
00:09:33.440 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2577648
00:09:33.440 00:46:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:09:33.440 00:46:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:33.440 00:46:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2577648
00:09:33.440 00:46:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:09:33.440 00:46:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:09:33.440 00:46:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2577648'
00:09:33.440 killing process with pid 2577648
00:09:33.440 00:46:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2577648
00:09:33.440 00:46:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait
2577648 00:09:33.699 00:46:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:33.699 00:46:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:33.699 00:46:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:33.699 00:46:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:33.699 00:46:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:33.699 00:46:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.699 00:46:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.699 00:46:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.600 00:46:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:35.600 00:09:35.600 real 0m12.996s 00:09:35.600 user 0m29.192s 00:09:35.600 sys 0m3.056s 00:09:35.600 00:46:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:35.600 00:46:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:35.600 ************************************ 00:09:35.600 END TEST nvmf_delete_subsystem 00:09:35.600 ************************************ 00:09:35.859 00:46:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:35.859 00:46:10 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:35.859 00:46:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:35.859 00:46:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.859 00:46:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:35.859 ************************************ 00:09:35.859 START TEST nvmf_ns_masking 00:09:35.859 ************************************ 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:35.859 * Looking for test storage... 
00:09:35.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=ecb5de5a-5bbb-4709-83b4-32d8070e2504 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=69b20f8d-e6f8-4c1c-ae13-13411ea864f2 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b3eda375-2b9c-4280-8245-394d00bfdda2 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:35.859 00:46:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:37.758 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:37.758 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:37.758 
00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:37.758 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:37.758 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:37.758 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:37.759 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:37.759 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:37.759 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.759 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.759 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:37.759 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:37.759 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:37.759 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:37.759 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:37.759 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:37.759 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.759 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:37.759 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:37.759 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:37.759 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:38.016 00:46:12 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:38.016 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:38.016 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:38.016 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:38.016 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:38.016 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:38.016 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:38.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:38.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:09:38.016 00:09:38.016 --- 10.0.0.2 ping statistics --- 00:09:38.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.016 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:09:38.016 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:38.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:38.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:09:38.016 00:09:38.016 --- 10.0.0.1 ping statistics --- 00:09:38.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.016 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:09:38.016 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.016 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:09:38.016 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:38.016 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.016 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:38.016 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:38.016 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.016 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:38.016 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:38.016 00:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:38.017 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:38.017 00:46:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:38.017 00:46:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:38.017 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2580675 00:09:38.017 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:38.017 00:46:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2580675 00:09:38.017 00:46:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2580675 ']' 00:09:38.017 00:46:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.017 00:46:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:38.017 00:46:12 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.017 00:46:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.017 00:46:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:38.017 [2024-07-16 00:46:12.706430] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:09:38.017 [2024-07-16 00:46:12.706518] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.017 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.017 [2024-07-16 00:46:12.771746] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.274 [2024-07-16 00:46:12.887590] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.274 [2024-07-16 00:46:12.887648] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.274 [2024-07-16 00:46:12.887661] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.274 [2024-07-16 00:46:12.887672] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.274 [2024-07-16 00:46:12.887681] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.274 [2024-07-16 00:46:12.887714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.274 00:46:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:38.274 00:46:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:38.274 00:46:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:38.274 00:46:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:38.274 00:46:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:38.530 00:46:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.530 00:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:38.786 [2024-07-16 00:46:13.309965] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.786 00:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:38.786 00:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:38.786 00:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:39.043 Malloc1 00:09:39.043 00:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:39.300 Malloc2 00:09:39.300 00:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
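Note: stripped of the xtrace noise, the target-side setup that this ns_masking run performs reduces to a handful of rpc.py calls. The condensed list below is assembled from the commands visible in the trace (the namespace and listener calls appear a few entries further down); it adds nothing beyond what the log already executes:
  # rpc.py stands for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py, as in the trace
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1                # 64 MB malloc bdev, 512-byte blocks
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host, -s: serial
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420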
00:09:39.557 00:46:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:39.815 00:46:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.072 [2024-07-16 00:46:14.751495] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.072 00:46:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:40.072 00:46:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b3eda375-2b9c-4280-8245-394d00bfdda2 -a 10.0.0.2 -s 4420 -i 4 00:09:40.329 00:46:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:40.329 00:46:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:40.329 00:46:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:40.329 00:46:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:40.329 00:46:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:42.239 00:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:42.239 00:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:42.239 00:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:42.239 00:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:42.239 00:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:42.239 00:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:42.239 00:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:42.239 00:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:42.497 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:42.497 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:42.497 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:42.497 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:42.497 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:42.497 [ 0]:0x1 00:09:42.497 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:42.497 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:42.497 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2f2fbc4d7afb4870a9bd39683fcf5721 00:09:42.497 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2f2fbc4d7afb4870a9bd39683fcf5721 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:42.497 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
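Note: the host-side check exercised above boils down to connecting with an explicit host NQN and host ID, then deciding visibility from the namespace list and its NGUID. A hedged reconstruction of the ns_is_visible helper, paraphrased from the xtrace output at ns_masking.sh@43-45 (the controller name /dev/nvme0 and the identifiers are simply the ones this log happens to use):
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I b3eda375-2b9c-4280-8245-394d00bfdda2 -a 10.0.0.2 -s 4420 -i 4
  ns_is_visible() {    # helper name from the trace; body is a paraphrase, not the script source
      nvme list-ns /dev/nvme0 | grep "$1"                               # e.g. "$1" = 0x1
      nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]                # in this trace, a masked namespace reports an all-zero NGUID
  }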
00:09:42.754 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:42.754 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:42.754 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:42.754 [ 0]:0x1 00:09:42.754 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:42.755 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:42.755 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2f2fbc4d7afb4870a9bd39683fcf5721 00:09:42.755 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2f2fbc4d7afb4870a9bd39683fcf5721 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:42.755 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:42.755 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:42.755 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:42.755 [ 1]:0x2 00:09:42.755 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:42.755 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:43.012 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c60da59c96ea40a48c1518d2ebfd86fd 00:09:43.012 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c60da59c96ea40a48c1518d2ebfd86fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:43.012 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:43.012 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:43.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.012 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.269 00:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:43.527 00:46:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:43.527 00:46:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b3eda375-2b9c-4280-8245-394d00bfdda2 -a 10.0.0.2 -s 4420 -i 4 00:09:43.785 00:46:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:43.785 00:46:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:43.785 00:46:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:43.785 00:46:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:43.785 00:46:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:43.785 00:46:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:45.684 00:46:20 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:45.684 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:45.944 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:45.944 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:45.944 00:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:45.944 00:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:45.944 00:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:45.944 00:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:45.944 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:45.944 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:45.944 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:45.944 [ 0]:0x2 00:09:45.944 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:45.944 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:45.944 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c60da59c96ea40a48c1518d2ebfd86fd 00:09:45.944 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
c60da59c96ea40a48c1518d2ebfd86fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:45.944 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:46.268 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:46.268 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:46.268 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:46.268 [ 0]:0x1 00:09:46.268 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:46.268 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:46.268 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2f2fbc4d7afb4870a9bd39683fcf5721 00:09:46.268 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2f2fbc4d7afb4870a9bd39683fcf5721 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:46.268 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:46.268 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:46.268 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:46.268 [ 1]:0x2 00:09:46.268 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:46.268 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:46.268 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c60da59c96ea40a48c1518d2ebfd86fd 00:09:46.268 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c60da59c96ea40a48c1518d2ebfd86fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:46.268 00:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:46.526 [ 0]:0x2 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c60da59c96ea40a48c1518d2ebfd86fd 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c60da59c96ea40a48c1518d2ebfd86fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:09:46.526 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:46.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.784 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:47.041 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:47.041 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b3eda375-2b9c-4280-8245-394d00bfdda2 -a 10.0.0.2 -s 4420 -i 4 00:09:47.041 00:46:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:47.041 00:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:47.041 00:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:47.041 00:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:47.041 00:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:47.041 00:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
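At this point the test switches from auto-visible namespaces to explicit masking: namespace 1 is re-added with --no-auto-visible so it starts hidden from every host, and nvmf_ns_add_host / nvmf_ns_remove_host then grant and revoke visibility for host1, which the NGUID probe above confirms on the live connection. Condensed from the rpc.py invocations in the trace (the rpc variable is just shorthand for the full scripts/rpc.py path used in the log):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # unmask ns 1 for host1
  $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # mask it again

Namespace 2, by contrast, was added without --no-auto-visible, and the NOT-wrapped nvmf_ns_remove_host call on it further down is expected to fail, producing the -32602 "Invalid parameters" JSON-RPC error captured in the trace.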
00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:49.563 [ 0]:0x1 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2f2fbc4d7afb4870a9bd39683fcf5721 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2f2fbc4d7afb4870a9bd39683fcf5721 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:49.563 [ 1]:0x2 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c60da59c96ea40a48c1518d2ebfd86fd 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c60da59c96ea40a48c1518d2ebfd86fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:49.563 00:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:49.563 [ 0]:0x2 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c60da59c96ea40a48c1518d2ebfd86fd 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c60da59c96ea40a48c1518d2ebfd86fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:49.563 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:49.564 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:49.564 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:49.564 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:49.564 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:49.821 [2024-07-16 00:46:24.476851] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:49.821 request: 00:09:49.821 { 00:09:49.821 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:49.821 "nsid": 2, 00:09:49.821 "host": "nqn.2016-06.io.spdk:host1", 00:09:49.821 "method": "nvmf_ns_remove_host", 00:09:49.821 "req_id": 1 00:09:49.821 } 00:09:49.821 Got JSON-RPC error response 00:09:49.821 response: 00:09:49.821 { 00:09:49.821 "code": -32602, 00:09:49.821 "message": "Invalid parameters" 00:09:49.821 } 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:49.821 [ 0]:0x2 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:49.821 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:50.079 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c60da59c96ea40a48c1518d2ebfd86fd 00:09:50.079 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
c60da59c96ea40a48c1518d2ebfd86fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:50.079 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:50.079 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:50.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.079 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2582178 00:09:50.079 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:50.079 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.079 00:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2582178 /var/tmp/host.sock 00:09:50.079 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2582178 ']' 00:09:50.079 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:50.079 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:50.079 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:50.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:50.079 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:50.079 00:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:50.079 [2024-07-16 00:46:24.693460] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
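The NOT wrappers scattered through the trace come from autotest_common.sh and simply assert that the wrapped command fails, turning an expected error into a passing step. A much-reduced stand-in is sketched below; the real helper also validates its argument with valid_exec_arg and tracks the exit status in es, as the autotest_common.sh lines in the trace show:

  # Simplified stand-in for the NOT helper exercised above; succeeds only
  # when the wrapped command fails.
  NOT() {
      if "$@"; then
          return 1
      fi
      return 0
  }

  NOT ns_is_visible 0x1   # passes while namespace 1 is masked from this host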
00:09:50.079 [2024-07-16 00:46:24.693556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2582178 ] 00:09:50.079 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.079 [2024-07-16 00:46:24.756671] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.337 [2024-07-16 00:46:24.877308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.267 00:46:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:51.267 00:46:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:51.267 00:46:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.267 00:46:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:51.524 00:46:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid ecb5de5a-5bbb-4709-83b4-32d8070e2504 00:09:51.524 00:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:51.524 00:46:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g ECB5DE5A5BBB470983B432D8070E2504 -i 00:09:51.782 00:46:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 69b20f8d-e6f8-4c1c-ae13-13411ea864f2 00:09:51.782 00:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:51.782 00:46:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 69B20F8DE6F84C1CAE1313411EA864F2 -i 00:09:52.039 00:46:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:52.296 00:46:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:52.860 00:46:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:52.860 00:46:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:53.117 nvme0n1 00:09:53.117 00:46:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:53.117 00:46:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:09:53.682 nvme1n2 00:09:53.682 00:46:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:53.682 00:46:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:53.682 00:46:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:53.682 00:46:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:53.682 00:46:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:53.939 00:46:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:53.939 00:46:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:53.939 00:46:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:53.939 00:46:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:54.196 00:46:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ ecb5de5a-5bbb-4709-83b4-32d8070e2504 == \e\c\b\5\d\e\5\a\-\5\b\b\b\-\4\7\0\9\-\8\3\b\4\-\3\2\d\8\0\7\0\e\2\5\0\4 ]] 00:09:54.196 00:46:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:54.196 00:46:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:54.196 00:46:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:54.454 00:46:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 69b20f8d-e6f8-4c1c-ae13-13411ea864f2 == \6\9\b\2\0\f\8\d\-\e\6\f\8\-\4\c\1\c\-\a\e\1\3\-\1\3\4\1\1\e\a\8\6\4\f\2 ]] 00:09:54.454 00:46:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2582178 00:09:54.454 00:46:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2582178 ']' 00:09:54.454 00:46:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2582178 00:09:54.454 00:46:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:54.454 00:46:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:54.454 00:46:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2582178 00:09:54.454 00:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:54.454 00:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:54.454 00:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2582178' 00:09:54.454 killing process with pid 2582178 00:09:54.454 00:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2582178 00:09:54.454 00:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2582178 00:09:55.019 00:46:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:55.276 00:46:29 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:55.276 rmmod nvme_tcp 00:09:55.276 rmmod nvme_fabrics 00:09:55.276 rmmod nvme_keyring 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2580675 ']' 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2580675 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2580675 ']' 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2580675 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2580675 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2580675' 00:09:55.276 killing process with pid 2580675 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2580675 00:09:55.276 00:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2580675 00:09:55.535 00:46:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:55.535 00:46:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:55.535 00:46:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:55.535 00:46:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:55.535 00:46:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:55.535 00:46:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.535 00:46:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:55.535 00:46:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.087 00:46:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:58.087 00:09:58.087 real 0m21.881s 00:09:58.087 user 0m29.346s 00:09:58.087 sys 0m4.244s 00:09:58.087 00:46:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:58.087 00:46:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:58.087 ************************************ 00:09:58.087 END TEST nvmf_ns_masking 00:09:58.087 ************************************ 00:09:58.087 00:46:32 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:09:58.087 00:46:32 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:58.087 00:46:32 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:58.087 00:46:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:58.087 00:46:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.087 00:46:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:58.087 ************************************ 00:09:58.087 START TEST nvmf_nvme_cli 00:09:58.087 ************************************ 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:58.087 * Looking for test storage... 00:09:58.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:58.087 00:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:59.984 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:59.984 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:59.984 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:59.984 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.984 00:46:34 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:59.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:09:59.984 00:09:59.984 --- 10.0.0.2 ping statistics --- 00:09:59.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.984 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:59.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:09:59.984 00:09:59.984 --- 10.0.0.1 ping statistics --- 00:09:59.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.984 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2584801 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2584801 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2584801 ']' 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:59.984 00:46:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:59.984 [2024-07-16 00:46:34.665337] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
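The nvmf_tcp_init steps above split the two e810 ports into an initiator side (cvl_0_1, 10.0.0.1) and a target side (cvl_0_0, 10.0.0.2), moving the target interface into its own network namespace so the target listens inside the namespace while nvme-cli connects from the default one. Condensed from the commands in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # accept incoming NVMe/TCP (port 4420) on the initiator-side interface
  ping -c 1 10.0.0.2                                                  # initiator -> target reachability check

nvmf_tgt itself is then launched under ip netns exec cvl_0_0_ns_spdk, which is why the nvmfpid startup that follows runs with that prefix.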
00:09:59.984 [2024-07-16 00:46:34.665442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.984 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.242 [2024-07-16 00:46:34.764597] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.242 [2024-07-16 00:46:34.924939] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.242 [2024-07-16 00:46:34.925017] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.242 [2024-07-16 00:46:34.925049] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.242 [2024-07-16 00:46:34.925075] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.242 [2024-07-16 00:46:34.925099] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.242 [2024-07-16 00:46:34.925202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.242 [2024-07-16 00:46:34.925264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.242 [2024-07-16 00:46:34.925323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.242 [2024-07-16 00:46:34.925334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.500 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:00.500 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:10:00.500 00:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:00.500 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:00.500 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:00.500 00:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.500 00:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:00.500 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.500 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:00.500 [2024-07-16 00:46:35.089698] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.500 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:00.501 Malloc0 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:00.501 Malloc1 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.501 00:46:35 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:00.501 [2024-07-16 00:46:35.171017] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.501 00:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:10:00.758 00:10:00.758 Discovery Log Number of Records 2, Generation counter 2 00:10:00.758 =====Discovery Log Entry 0====== 00:10:00.758 trtype: tcp 00:10:00.758 adrfam: ipv4 00:10:00.758 subtype: current discovery subsystem 00:10:00.758 treq: not required 00:10:00.758 portid: 0 00:10:00.758 trsvcid: 4420 00:10:00.758 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:00.758 traddr: 10.0.0.2 00:10:00.758 eflags: explicit discovery connections, duplicate discovery information 00:10:00.758 sectype: none 00:10:00.758 =====Discovery Log Entry 1====== 00:10:00.758 trtype: tcp 00:10:00.758 adrfam: ipv4 00:10:00.758 subtype: nvme subsystem 00:10:00.758 treq: not required 00:10:00.758 portid: 0 00:10:00.758 trsvcid: 4420 00:10:00.758 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:00.758 traddr: 10.0.0.2 00:10:00.758 eflags: none 00:10:00.758 sectype: none 00:10:00.758 00:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:10:00.758 00:46:35 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:10:00.758 00:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:00.758 00:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:00.758 00:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:00.758 00:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:00.758 00:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:00.758 00:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:00.758 00:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:00.758 00:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:10:00.758 00:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:01.328 00:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:01.328 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:10:01.328 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:01.328 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:10:01.328 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:10:01.328 00:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:10:03.290 00:46:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:03.290 00:46:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:03.290 00:46:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:03.290 00:46:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:10:03.291 00:46:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:03.291 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:10:03.291 00:46:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:10:03.291 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:03.291 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:03.291 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:03.548 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:03.548 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:03.548 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:03.548 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:03.548 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:03.548 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:03.548 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:03.548 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:03.548 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:03.548 00:46:38 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:03.548 00:46:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:10:03.548 /dev/nvme0n1 ]] 00:10:03.548 00:46:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:10:03.548 00:46:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:10:03.548 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:03.548 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:03.548 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:03.548 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:03.548 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:03.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.806 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:04.064 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.064 00:46:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:04.064 00:46:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:10:04.064 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:04.064 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:10:04.064 00:46:38 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:04.064 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:10:04.064 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:04.064 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:04.064 rmmod nvme_tcp 00:10:04.064 rmmod nvme_fabrics 00:10:04.064 rmmod nvme_keyring 00:10:04.064 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:04.064 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:10:04.064 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:10:04.064 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2584801 ']' 00:10:04.065 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2584801 00:10:04.065 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2584801 ']' 00:10:04.065 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2584801 00:10:04.065 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:10:04.065 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:04.065 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2584801 00:10:04.065 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:04.065 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:04.065 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2584801' 00:10:04.065 killing process with pid 2584801 00:10:04.065 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2584801 00:10:04.065 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2584801 00:10:04.323 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:04.323 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:04.323 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:04.323 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:04.323 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:04.323 00:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.323 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:04.323 00:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.856 00:46:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:06.856 00:10:06.856 real 0m8.712s 00:10:06.856 user 0m16.669s 00:10:06.856 sys 0m2.322s 00:10:06.856 00:46:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:06.856 00:46:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:06.856 ************************************ 00:10:06.856 END TEST nvmf_nvme_cli 00:10:06.856 ************************************ 00:10:06.856 00:46:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:06.856 00:46:41 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:10:06.856 00:46:41 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:06.856 00:46:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:06.856 00:46:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.856 00:46:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:06.856 ************************************ 00:10:06.856 START TEST nvmf_vfio_user 00:10:06.856 ************************************ 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:06.857 * Looking for test storage... 00:10:06.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:10:06.857 
00:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2585723 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2585723' 00:10:06.857 Process pid: 2585723 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2585723 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2585723 ']' 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:06.857 [2024-07-16 00:46:41.214665] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:10:06.857 [2024-07-16 00:46:41.214754] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.857 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.857 [2024-07-16 00:46:41.274049] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.857 [2024-07-16 00:46:41.382247] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.857 [2024-07-16 00:46:41.382308] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.857 [2024-07-16 00:46:41.382322] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.857 [2024-07-16 00:46:41.382333] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.857 [2024-07-16 00:46:41.382351] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
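[editor's note] The per-controller vfio-user setup performed next in the trace amounts to the sequence below, reconstructed from the rpc.py calls in this log. The directory layout under /var/run/vfio-user, the Malloc/NQN/serial names and the two-device loop all mirror what this test does; treat it as a sketch, not an authoritative setup script.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py
# The target was already started above with: nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'

# One VFIOUSER transport for the target, then one socket directory per emulated controller.
$RPC nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $RPC bdev_malloc_create 64 512 -b Malloc$i
    $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    # For vfio-user the listener "address" is the socket directory, not an IP; service id is 0.
    $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done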
00:10:06.857 [2024-07-16 00:46:41.382504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.857 [2024-07-16 00:46:41.382571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.857 [2024-07-16 00:46:41.382636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.857 [2024-07-16 00:46:41.382639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:06.857 00:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:07.785 00:46:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:10:08.041 00:46:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:08.041 00:46:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:08.041 00:46:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:08.041 00:46:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:08.041 00:46:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:08.606 Malloc1 00:10:08.606 00:46:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:08.606 00:46:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:08.862 00:46:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:09.118 00:46:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:09.118 00:46:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:09.118 00:46:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:09.375 Malloc2 00:10:09.375 00:46:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:09.632 00:46:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:09.888 00:46:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:10.146 00:46:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:10:10.146 00:46:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:10:10.146 00:46:44 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:10.146 00:46:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:10.146 00:46:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:10:10.146 00:46:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:10.146 [2024-07-16 00:46:44.856753] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:10:10.146 [2024-07-16 00:46:44.856805] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2586150 ] 00:10:10.146 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.146 [2024-07-16 00:46:44.892244] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:10:10.146 [2024-07-16 00:46:44.894738] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:10.146 [2024-07-16 00:46:44.894766] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5be6ffb000 00:10:10.146 [2024-07-16 00:46:44.895735] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:10.146 [2024-07-16 00:46:44.896730] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:10.146 [2024-07-16 00:46:44.897732] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:10.146 [2024-07-16 00:46:44.898738] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:10.146 [2024-07-16 00:46:44.899741] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:10.146 [2024-07-16 00:46:44.900748] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:10.146 [2024-07-16 00:46:44.901758] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:10.146 [2024-07-16 00:46:44.902759] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:10.405 [2024-07-16 00:46:44.903769] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:10.405 [2024-07-16 00:46:44.903799] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5be6ff0000 00:10:10.405 [2024-07-16 00:46:44.904969] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:10.405 [2024-07-16 00:46:44.919741] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:10:10.405 [2024-07-16 00:46:44.919778] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:10:10.405 [2024-07-16 00:46:44.924918] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:10.405 [2024-07-16 00:46:44.924989] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:10.405 [2024-07-16 00:46:44.925092] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:10:10.405 [2024-07-16 00:46:44.925124] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:10:10.405 [2024-07-16 00:46:44.925135] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:10:10.405 [2024-07-16 00:46:44.925913] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:10:10.405 [2024-07-16 00:46:44.925946] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:10:10.405 [2024-07-16 00:46:44.925960] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:10:10.405 [2024-07-16 00:46:44.926923] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:10.405 [2024-07-16 00:46:44.926951] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:10:10.405 [2024-07-16 00:46:44.926964] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:10:10.405 [2024-07-16 00:46:44.927930] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:10:10.405 [2024-07-16 00:46:44.927951] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:10.405 [2024-07-16 00:46:44.928938] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:10:10.405 [2024-07-16 00:46:44.928968] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:10:10.405 [2024-07-16 00:46:44.928978] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:10:10.405 [2024-07-16 00:46:44.928989] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:10.405 [2024-07-16 00:46:44.929099] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:10:10.405 [2024-07-16 00:46:44.929107] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:10.405 [2024-07-16 00:46:44.929116] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:10:10.405 [2024-07-16 00:46:44.929949] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:10:10.405 [2024-07-16 00:46:44.930947] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:10:10.405 [2024-07-16 00:46:44.931947] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:10.405 [2024-07-16 00:46:44.932943] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:10.405 [2024-07-16 00:46:44.933035] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:10.405 [2024-07-16 00:46:44.933959] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:10:10.405 [2024-07-16 00:46:44.933979] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:10.405 [2024-07-16 00:46:44.933988] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:10:10.405 [2024-07-16 00:46:44.934013] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:10:10.405 [2024-07-16 00:46:44.934027] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:10:10.405 [2024-07-16 00:46:44.934056] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:10.405 [2024-07-16 00:46:44.934067] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:10.405 [2024-07-16 00:46:44.934088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:10.405 [2024-07-16 00:46:44.934142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:10.405 [2024-07-16 00:46:44.934161] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:10:10.405 [2024-07-16 00:46:44.934189] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:10:10.405 [2024-07-16 00:46:44.934198] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:10:10.405 [2024-07-16 00:46:44.934206] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:10.405 [2024-07-16 00:46:44.934214] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:10:10.405 [2024-07-16 00:46:44.934222] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:10:10.405 [2024-07-16 00:46:44.934230] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:10:10.405 [2024-07-16 00:46:44.934258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:10:10.405 [2024-07-16 00:46:44.934281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:10.405 [2024-07-16 00:46:44.934294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:10.405 [2024-07-16 00:46:44.934312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.405 [2024-07-16 00:46:44.934324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.405 [2024-07-16 00:46:44.934336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.405 [2024-07-16 00:46:44.934348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.405 [2024-07-16 00:46:44.934356] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:10:10.405 [2024-07-16 00:46:44.934371] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:10.405 [2024-07-16 00:46:44.934387] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:10.405 [2024-07-16 00:46:44.934398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:10.405 [2024-07-16 00:46:44.934410] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:10:10.405 [2024-07-16 00:46:44.934418] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:10.405 [2024-07-16 00:46:44.934432] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:10:10.405 [2024-07-16 00:46:44.934444] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:10:10.405 [2024-07-16 00:46:44.934456] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:10.405 [2024-07-16 00:46:44.934470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:10.405 [2024-07-16 00:46:44.934540] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:10:10.405 [2024-07-16 00:46:44.934557] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:10:10.405 [2024-07-16 00:46:44.934571] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:10.405 [2024-07-16 00:46:44.934579] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:10.405 [2024-07-16 00:46:44.934589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:10.405 [2024-07-16 00:46:44.934603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:10.405 [2024-07-16 00:46:44.934621] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:10:10.405 [2024-07-16 00:46:44.934643] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:10:10.405 [2024-07-16 00:46:44.934659] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:10:10.405 [2024-07-16 00:46:44.934671] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:10.405 [2024-07-16 00:46:44.934679] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:10.405 [2024-07-16 00:46:44.934688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:10.405 [2024-07-16 00:46:44.934710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:10.405 [2024-07-16 00:46:44.934735] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:10.405 [2024-07-16 00:46:44.934751] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:10.405 [2024-07-16 00:46:44.934763] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:10.405 [2024-07-16 00:46:44.934771] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:10.405 [2024-07-16 00:46:44.934781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:10.406 [2024-07-16 00:46:44.934794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:10.406 [2024-07-16 00:46:44.934809] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:10.406 [2024-07-16 00:46:44.934820] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:10:10.406 [2024-07-16 00:46:44.934834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:10:10.406 [2024-07-16 00:46:44.934845] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:10:10.406 [2024-07-16 00:46:44.934853] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:10.406 [2024-07-16 00:46:44.934886] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:10:10.406 [2024-07-16 00:46:44.934901] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:10:10.406 [2024-07-16 00:46:44.934909] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:10:10.406 [2024-07-16 00:46:44.934928] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:10:10.406 [2024-07-16 00:46:44.934956] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:10.406 [2024-07-16 00:46:44.934975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:10.406 [2024-07-16 00:46:44.934995] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:10.406 [2024-07-16 00:46:44.935007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:10.406 [2024-07-16 00:46:44.935024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:10.406 [2024-07-16 00:46:44.935039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:10.406 [2024-07-16 00:46:44.935055] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:10.406 [2024-07-16 00:46:44.935066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:10.406 [2024-07-16 00:46:44.935090] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:10.406 [2024-07-16 00:46:44.935101] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:10.406 [2024-07-16 00:46:44.935107] nvme_pcie_common.c:1240:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:10.406 [2024-07-16 00:46:44.935113] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:10.406 [2024-07-16 00:46:44.935122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:10.406 [2024-07-16 00:46:44.935134] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:10.406 
[2024-07-16 00:46:44.935143] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:10.406 [2024-07-16 00:46:44.935152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:10.406 [2024-07-16 00:46:44.935178] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:10.406 [2024-07-16 00:46:44.935187] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:10.406 [2024-07-16 00:46:44.935196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:10.406 [2024-07-16 00:46:44.935208] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:10.406 [2024-07-16 00:46:44.935216] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:10.406 [2024-07-16 00:46:44.935225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:10.406 [2024-07-16 00:46:44.935236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:10.406 [2024-07-16 00:46:44.935255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:10.406 [2024-07-16 00:46:44.935275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:10.406 [2024-07-16 00:46:44.935291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:10.406 ===================================================== 00:10:10.406 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:10.406 ===================================================== 00:10:10.406 Controller Capabilities/Features 00:10:10.406 ================================ 00:10:10.406 Vendor ID: 4e58 00:10:10.406 Subsystem Vendor ID: 4e58 00:10:10.406 Serial Number: SPDK1 00:10:10.406 Model Number: SPDK bdev Controller 00:10:10.406 Firmware Version: 24.09 00:10:10.406 Recommended Arb Burst: 6 00:10:10.406 IEEE OUI Identifier: 8d 6b 50 00:10:10.406 Multi-path I/O 00:10:10.406 May have multiple subsystem ports: Yes 00:10:10.406 May have multiple controllers: Yes 00:10:10.406 Associated with SR-IOV VF: No 00:10:10.406 Max Data Transfer Size: 131072 00:10:10.406 Max Number of Namespaces: 32 00:10:10.406 Max Number of I/O Queues: 127 00:10:10.406 NVMe Specification Version (VS): 1.3 00:10:10.406 NVMe Specification Version (Identify): 1.3 00:10:10.406 Maximum Queue Entries: 256 00:10:10.406 Contiguous Queues Required: Yes 00:10:10.406 Arbitration Mechanisms Supported 00:10:10.406 Weighted Round Robin: Not Supported 00:10:10.406 Vendor Specific: Not Supported 00:10:10.406 Reset Timeout: 15000 ms 00:10:10.406 Doorbell Stride: 4 bytes 00:10:10.406 NVM Subsystem Reset: Not Supported 00:10:10.406 Command Sets Supported 00:10:10.406 NVM Command Set: Supported 00:10:10.406 Boot Partition: Not Supported 00:10:10.406 Memory Page Size Minimum: 4096 bytes 00:10:10.406 Memory Page Size Maximum: 4096 bytes 00:10:10.406 Persistent Memory Region: Not Supported 
00:10:10.406 Optional Asynchronous Events Supported 00:10:10.406 Namespace Attribute Notices: Supported 00:10:10.406 Firmware Activation Notices: Not Supported 00:10:10.406 ANA Change Notices: Not Supported 00:10:10.406 PLE Aggregate Log Change Notices: Not Supported 00:10:10.406 LBA Status Info Alert Notices: Not Supported 00:10:10.406 EGE Aggregate Log Change Notices: Not Supported 00:10:10.406 Normal NVM Subsystem Shutdown event: Not Supported 00:10:10.406 Zone Descriptor Change Notices: Not Supported 00:10:10.406 Discovery Log Change Notices: Not Supported 00:10:10.406 Controller Attributes 00:10:10.406 128-bit Host Identifier: Supported 00:10:10.406 Non-Operational Permissive Mode: Not Supported 00:10:10.406 NVM Sets: Not Supported 00:10:10.406 Read Recovery Levels: Not Supported 00:10:10.406 Endurance Groups: Not Supported 00:10:10.406 Predictable Latency Mode: Not Supported 00:10:10.406 Traffic Based Keep ALive: Not Supported 00:10:10.406 Namespace Granularity: Not Supported 00:10:10.406 SQ Associations: Not Supported 00:10:10.406 UUID List: Not Supported 00:10:10.406 Multi-Domain Subsystem: Not Supported 00:10:10.406 Fixed Capacity Management: Not Supported 00:10:10.406 Variable Capacity Management: Not Supported 00:10:10.406 Delete Endurance Group: Not Supported 00:10:10.406 Delete NVM Set: Not Supported 00:10:10.406 Extended LBA Formats Supported: Not Supported 00:10:10.406 Flexible Data Placement Supported: Not Supported 00:10:10.406 00:10:10.406 Controller Memory Buffer Support 00:10:10.406 ================================ 00:10:10.406 Supported: No 00:10:10.406 00:10:10.406 Persistent Memory Region Support 00:10:10.406 ================================ 00:10:10.406 Supported: No 00:10:10.406 00:10:10.406 Admin Command Set Attributes 00:10:10.406 ============================ 00:10:10.406 Security Send/Receive: Not Supported 00:10:10.406 Format NVM: Not Supported 00:10:10.406 Firmware Activate/Download: Not Supported 00:10:10.406 Namespace Management: Not Supported 00:10:10.406 Device Self-Test: Not Supported 00:10:10.406 Directives: Not Supported 00:10:10.406 NVMe-MI: Not Supported 00:10:10.406 Virtualization Management: Not Supported 00:10:10.406 Doorbell Buffer Config: Not Supported 00:10:10.406 Get LBA Status Capability: Not Supported 00:10:10.406 Command & Feature Lockdown Capability: Not Supported 00:10:10.406 Abort Command Limit: 4 00:10:10.406 Async Event Request Limit: 4 00:10:10.406 Number of Firmware Slots: N/A 00:10:10.406 Firmware Slot 1 Read-Only: N/A 00:10:10.406 Firmware Activation Without Reset: N/A 00:10:10.406 Multiple Update Detection Support: N/A 00:10:10.406 Firmware Update Granularity: No Information Provided 00:10:10.406 Per-Namespace SMART Log: No 00:10:10.406 Asymmetric Namespace Access Log Page: Not Supported 00:10:10.406 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:10:10.406 Command Effects Log Page: Supported 00:10:10.406 Get Log Page Extended Data: Supported 00:10:10.406 Telemetry Log Pages: Not Supported 00:10:10.406 Persistent Event Log Pages: Not Supported 00:10:10.406 Supported Log Pages Log Page: May Support 00:10:10.406 Commands Supported & Effects Log Page: Not Supported 00:10:10.406 Feature Identifiers & Effects Log Page:May Support 00:10:10.406 NVMe-MI Commands & Effects Log Page: May Support 00:10:10.406 Data Area 4 for Telemetry Log: Not Supported 00:10:10.406 Error Log Page Entries Supported: 128 00:10:10.406 Keep Alive: Supported 00:10:10.406 Keep Alive Granularity: 10000 ms 00:10:10.406 00:10:10.406 NVM Command Set Attributes 
00:10:10.406 ========================== 00:10:10.406 Submission Queue Entry Size 00:10:10.406 Max: 64 00:10:10.406 Min: 64 00:10:10.406 Completion Queue Entry Size 00:10:10.406 Max: 16 00:10:10.406 Min: 16 00:10:10.406 Number of Namespaces: 32 00:10:10.406 Compare Command: Supported 00:10:10.407 Write Uncorrectable Command: Not Supported 00:10:10.407 Dataset Management Command: Supported 00:10:10.407 Write Zeroes Command: Supported 00:10:10.407 Set Features Save Field: Not Supported 00:10:10.407 Reservations: Not Supported 00:10:10.407 Timestamp: Not Supported 00:10:10.407 Copy: Supported 00:10:10.407 Volatile Write Cache: Present 00:10:10.407 Atomic Write Unit (Normal): 1 00:10:10.407 Atomic Write Unit (PFail): 1 00:10:10.407 Atomic Compare & Write Unit: 1 00:10:10.407 Fused Compare & Write: Supported 00:10:10.407 Scatter-Gather List 00:10:10.407 SGL Command Set: Supported (Dword aligned) 00:10:10.407 SGL Keyed: Not Supported 00:10:10.407 SGL Bit Bucket Descriptor: Not Supported 00:10:10.407 SGL Metadata Pointer: Not Supported 00:10:10.407 Oversized SGL: Not Supported 00:10:10.407 SGL Metadata Address: Not Supported 00:10:10.407 SGL Offset: Not Supported 00:10:10.407 Transport SGL Data Block: Not Supported 00:10:10.407 Replay Protected Memory Block: Not Supported 00:10:10.407 00:10:10.407 Firmware Slot Information 00:10:10.407 ========================= 00:10:10.407 Active slot: 1 00:10:10.407 Slot 1 Firmware Revision: 24.09 00:10:10.407 00:10:10.407 00:10:10.407 Commands Supported and Effects 00:10:10.407 ============================== 00:10:10.407 Admin Commands 00:10:10.407 -------------- 00:10:10.407 Get Log Page (02h): Supported 00:10:10.407 Identify (06h): Supported 00:10:10.407 Abort (08h): Supported 00:10:10.407 Set Features (09h): Supported 00:10:10.407 Get Features (0Ah): Supported 00:10:10.407 Asynchronous Event Request (0Ch): Supported 00:10:10.407 Keep Alive (18h): Supported 00:10:10.407 I/O Commands 00:10:10.407 ------------ 00:10:10.407 Flush (00h): Supported LBA-Change 00:10:10.407 Write (01h): Supported LBA-Change 00:10:10.407 Read (02h): Supported 00:10:10.407 Compare (05h): Supported 00:10:10.407 Write Zeroes (08h): Supported LBA-Change 00:10:10.407 Dataset Management (09h): Supported LBA-Change 00:10:10.407 Copy (19h): Supported LBA-Change 00:10:10.407 00:10:10.407 Error Log 00:10:10.407 ========= 00:10:10.407 00:10:10.407 Arbitration 00:10:10.407 =========== 00:10:10.407 Arbitration Burst: 1 00:10:10.407 00:10:10.407 Power Management 00:10:10.407 ================ 00:10:10.407 Number of Power States: 1 00:10:10.407 Current Power State: Power State #0 00:10:10.407 Power State #0: 00:10:10.407 Max Power: 0.00 W 00:10:10.407 Non-Operational State: Operational 00:10:10.407 Entry Latency: Not Reported 00:10:10.407 Exit Latency: Not Reported 00:10:10.407 Relative Read Throughput: 0 00:10:10.407 Relative Read Latency: 0 00:10:10.407 Relative Write Throughput: 0 00:10:10.407 Relative Write Latency: 0 00:10:10.407 Idle Power: Not Reported 00:10:10.407 Active Power: Not Reported 00:10:10.407 Non-Operational Permissive Mode: Not Supported 00:10:10.407 00:10:10.407 Health Information 00:10:10.407 ================== 00:10:10.407 Critical Warnings: 00:10:10.407 Available Spare Space: OK 00:10:10.407 Temperature: OK 00:10:10.407 Device Reliability: OK 00:10:10.407 Read Only: No 00:10:10.407 Volatile Memory Backup: OK 00:10:10.407 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:10.407 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:10.407 Available Spare: 0% 00:10:10.407 
Available Spare Threshold: 0% 00:10:10.407 [2024-07-16 00:46:44.935415] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:10.407 [2024-07-16 00:46:44.935431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:10.407 [2024-07-16 00:46:44.935477] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:10:10.407 [2024-07-16 00:46:44.935496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.407 [2024-07-16 00:46:44.935507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.407 [2024-07-16 00:46:44.935517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.407 [2024-07-16 00:46:44.935526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.407 [2024-07-16 00:46:44.939887] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:10.407 [2024-07-16 00:46:44.939911] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:10:10.407 [2024-07-16 00:46:44.939979] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:10.407 [2024-07-16 00:46:44.940050] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:10:10.407 [2024-07-16 00:46:44.940065] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:10:10.407 [2024-07-16 00:46:44.940993] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:10:10.407 [2024-07-16 00:46:44.941017] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:10:10.407 [2024-07-16 00:46:44.941073] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:10:10.407 [2024-07-16 00:46:44.943034] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:10.407 Life Percentage Used: 0% 00:10:10.407 Data Units Read: 0 00:10:10.407 Data Units Written: 0 00:10:10.407 Host Read Commands: 0 00:10:10.407 Host Write Commands: 0 00:10:10.407 Controller Busy Time: 0 minutes 00:10:10.407 Power Cycles: 0 00:10:10.407 Power On Hours: 0 hours 00:10:10.407 Unsafe Shutdowns: 0 00:10:10.407 Unrecoverable Media Errors: 0 00:10:10.407 Lifetime Error Log Entries: 0 00:10:10.407 Warning Temperature Time: 0 minutes 00:10:10.407 Critical Temperature Time: 0 minutes 00:10:10.407 00:10:10.407 Number of Queues 00:10:10.407 ================ 00:10:10.407 Number of I/O Submission Queues: 127 00:10:10.407 Number of I/O Completion Queues: 127 00:10:10.407 00:10:10.407 Active Namespaces 00:10:10.407 ================= 00:10:10.407 Namespace ID:1 00:10:10.407 Error Recovery Timeout: Unlimited 00:10:10.407 Command 
Set Identifier: NVM (00h) 00:10:10.407 Deallocate: Supported 00:10:10.407 Deallocated/Unwritten Error: Not Supported 00:10:10.407 Deallocated Read Value: Unknown 00:10:10.407 Deallocate in Write Zeroes: Not Supported 00:10:10.407 Deallocated Guard Field: 0xFFFF 00:10:10.407 Flush: Supported 00:10:10.407 Reservation: Supported 00:10:10.407 Namespace Sharing Capabilities: Multiple Controllers 00:10:10.407 Size (in LBAs): 131072 (0GiB) 00:10:10.407 Capacity (in LBAs): 131072 (0GiB) 00:10:10.407 Utilization (in LBAs): 131072 (0GiB) 00:10:10.407 NGUID: 8A2A5C211F4347699D7EFF3B0EC5D315 00:10:10.407 UUID: 8a2a5c21-1f43-4769-9d7e-ff3b0ec5d315 00:10:10.407 Thin Provisioning: Not Supported 00:10:10.407 Per-NS Atomic Units: Yes 00:10:10.407 Atomic Boundary Size (Normal): 0 00:10:10.407 Atomic Boundary Size (PFail): 0 00:10:10.407 Atomic Boundary Offset: 0 00:10:10.407 Maximum Single Source Range Length: 65535 00:10:10.407 Maximum Copy Length: 65535 00:10:10.407 Maximum Source Range Count: 1 00:10:10.407 NGUID/EUI64 Never Reused: No 00:10:10.407 Namespace Write Protected: No 00:10:10.407 Number of LBA Formats: 1 00:10:10.407 Current LBA Format: LBA Format #00 00:10:10.407 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:10.407 00:10:10.407 00:46:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:10.407 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.664 [2024-07-16 00:46:45.176682] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:15.924 Initializing NVMe Controllers 00:10:15.924 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:15.924 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:15.924 Initialization complete. Launching workers. 00:10:15.924 ======================================================== 00:10:15.925 Latency(us) 00:10:15.925 Device Information : IOPS MiB/s Average min max 00:10:15.925 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34273.53 133.88 3734.07 1176.99 7313.12 00:10:15.925 ======================================================== 00:10:15.925 Total : 34273.53 133.88 3734.07 1176.99 7313.12 00:10:15.925 00:10:15.925 [2024-07-16 00:46:50.197070] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:15.925 00:46:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:15.925 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.925 [2024-07-16 00:46:50.443276] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:21.181 Initializing NVMe Controllers 00:10:21.181 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:21.181 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:21.181 Initialization complete. Launching workers. 
00:10:21.181 ======================================================== 00:10:21.181 Latency(us) 00:10:21.181 Device Information : IOPS MiB/s Average min max 00:10:21.181 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.18 62.70 7982.78 7359.51 10943.53 00:10:21.181 ======================================================== 00:10:21.181 Total : 16051.18 62.70 7982.78 7359.51 10943.53 00:10:21.181 00:10:21.181 [2024-07-16 00:46:55.480884] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:21.181 00:46:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:21.181 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.181 [2024-07-16 00:46:55.692908] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:26.508 [2024-07-16 00:47:00.765237] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:26.508 Initializing NVMe Controllers 00:10:26.508 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:26.508 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:26.508 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:26.508 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:26.508 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:26.508 Initialization complete. Launching workers. 00:10:26.508 Starting thread on core 2 00:10:26.508 Starting thread on core 3 00:10:26.508 Starting thread on core 1 00:10:26.508 00:47:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:26.508 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.508 [2024-07-16 00:47:01.059991] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:29.790 [2024-07-16 00:47:04.521144] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:30.047 Initializing NVMe Controllers 00:10:30.048 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:30.048 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:30.048 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:10:30.048 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:10:30.048 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:10:30.048 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:10:30.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:30.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:30.048 Initialization complete. Launching workers. 
00:10:30.048 Starting thread on core 1 with urgent priority queue 00:10:30.048 Starting thread on core 2 with urgent priority queue 00:10:30.048 Starting thread on core 3 with urgent priority queue 00:10:30.048 Starting thread on core 0 with urgent priority queue 00:10:30.048 SPDK bdev Controller (SPDK1 ) core 0: 2776.67 IO/s 36.01 secs/100000 ios 00:10:30.048 SPDK bdev Controller (SPDK1 ) core 1: 2696.33 IO/s 37.09 secs/100000 ios 00:10:30.048 SPDK bdev Controller (SPDK1 ) core 2: 2475.67 IO/s 40.39 secs/100000 ios 00:10:30.048 SPDK bdev Controller (SPDK1 ) core 3: 2673.33 IO/s 37.41 secs/100000 ios 00:10:30.048 ======================================================== 00:10:30.048 00:10:30.048 00:47:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:30.048 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.305 [2024-07-16 00:47:04.824402] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:30.305 Initializing NVMe Controllers 00:10:30.305 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:30.305 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:30.305 Namespace ID: 1 size: 0GB 00:10:30.305 Initialization complete. 00:10:30.305 INFO: using host memory buffer for IO 00:10:30.305 Hello world! 00:10:30.305 [2024-07-16 00:47:04.858961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:30.305 00:47:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:30.305 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.563 [2024-07-16 00:47:05.147335] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:31.495 Initializing NVMe Controllers 00:10:31.495 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:31.495 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:31.495 Initialization complete. Launching workers. 
00:10:31.495 submit (in ns) avg, min, max = 7357.2, 3537.8, 4014430.0 00:10:31.495 complete (in ns) avg, min, max = 24768.8, 2062.2, 5994870.0 00:10:31.495 00:10:31.496 Submit histogram 00:10:31.496 ================ 00:10:31.496 Range in us Cumulative Count 00:10:31.496 3.532 - 3.556: 0.1336% ( 18) 00:10:31.496 3.556 - 3.579: 0.8460% ( 96) 00:10:31.496 3.579 - 3.603: 3.1985% ( 317) 00:10:31.496 3.603 - 3.627: 7.1837% ( 537) 00:10:31.496 3.627 - 3.650: 15.8071% ( 1162) 00:10:31.496 3.650 - 3.674: 25.0093% ( 1240) 00:10:31.496 3.674 - 3.698: 34.0853% ( 1223) 00:10:31.496 3.698 - 3.721: 41.7514% ( 1033) 00:10:31.496 3.721 - 3.745: 48.9573% ( 971) 00:10:31.496 3.745 - 3.769: 55.0724% ( 824) 00:10:31.496 3.769 - 3.793: 60.4304% ( 722) 00:10:31.496 3.793 - 3.816: 64.8312% ( 593) 00:10:31.496 3.816 - 3.840: 68.6160% ( 510) 00:10:31.496 3.840 - 3.864: 72.5269% ( 527) 00:10:31.496 3.864 - 3.887: 76.1336% ( 486) 00:10:31.496 3.887 - 3.911: 79.4360% ( 445) 00:10:31.496 3.911 - 3.935: 83.0872% ( 492) 00:10:31.496 3.935 - 3.959: 85.5213% ( 328) 00:10:31.496 3.959 - 3.982: 87.5176% ( 269) 00:10:31.496 3.982 - 4.006: 89.4471% ( 260) 00:10:31.496 4.006 - 4.030: 90.9165% ( 198) 00:10:31.496 4.030 - 4.053: 92.3191% ( 189) 00:10:31.496 4.053 - 4.077: 93.5807% ( 170) 00:10:31.496 4.077 - 4.101: 94.4861% ( 122) 00:10:31.496 4.101 - 4.124: 95.1763% ( 93) 00:10:31.496 4.124 - 4.148: 95.9109% ( 99) 00:10:31.496 4.148 - 4.172: 96.4082% ( 67) 00:10:31.496 4.172 - 4.196: 96.6753% ( 36) 00:10:31.496 4.196 - 4.219: 96.8980% ( 30) 00:10:31.496 4.219 - 4.243: 97.0390% ( 19) 00:10:31.496 4.243 - 4.267: 97.1800% ( 19) 00:10:31.496 4.267 - 4.290: 97.2839% ( 14) 00:10:31.496 4.290 - 4.314: 97.3803% ( 13) 00:10:31.496 4.314 - 4.338: 97.4545% ( 10) 00:10:31.496 4.338 - 4.361: 97.5065% ( 7) 00:10:31.496 4.361 - 4.385: 97.5881% ( 11) 00:10:31.496 4.385 - 4.409: 97.6252% ( 5) 00:10:31.496 4.409 - 4.433: 97.6475% ( 3) 00:10:31.496 4.433 - 4.456: 97.6772% ( 4) 00:10:31.496 4.456 - 4.480: 97.6920% ( 2) 00:10:31.496 4.480 - 4.504: 97.7069% ( 2) 00:10:31.496 4.504 - 4.527: 97.7143% ( 1) 00:10:31.496 4.551 - 4.575: 97.7291% ( 2) 00:10:31.496 4.575 - 4.599: 97.7365% ( 1) 00:10:31.496 4.599 - 4.622: 97.7440% ( 1) 00:10:31.496 4.622 - 4.646: 97.7588% ( 2) 00:10:31.496 4.646 - 4.670: 97.7662% ( 1) 00:10:31.496 4.693 - 4.717: 97.7885% ( 3) 00:10:31.496 4.717 - 4.741: 97.8033% ( 2) 00:10:31.496 4.741 - 4.764: 97.8256% ( 3) 00:10:31.496 4.764 - 4.788: 97.8330% ( 1) 00:10:31.496 4.788 - 4.812: 97.8479% ( 2) 00:10:31.496 4.812 - 4.836: 97.8850% ( 5) 00:10:31.496 4.836 - 4.859: 97.9295% ( 6) 00:10:31.496 4.859 - 4.883: 97.9814% ( 7) 00:10:31.496 4.883 - 4.907: 98.0557% ( 10) 00:10:31.496 4.907 - 4.930: 98.1002% ( 6) 00:10:31.496 4.930 - 4.954: 98.2263% ( 17) 00:10:31.496 4.954 - 4.978: 98.2857% ( 8) 00:10:31.496 4.978 - 5.001: 98.3525% ( 9) 00:10:31.496 5.001 - 5.025: 98.3673% ( 2) 00:10:31.496 5.025 - 5.049: 98.4119% ( 6) 00:10:31.496 5.049 - 5.073: 98.4490% ( 5) 00:10:31.496 5.073 - 5.096: 98.4712% ( 3) 00:10:31.496 5.096 - 5.120: 98.4861% ( 2) 00:10:31.496 5.120 - 5.144: 98.5009% ( 2) 00:10:31.496 5.144 - 5.167: 98.5380% ( 5) 00:10:31.496 5.191 - 5.215: 98.5455% ( 1) 00:10:31.496 5.215 - 5.239: 98.5677% ( 3) 00:10:31.496 5.239 - 5.262: 98.5751% ( 1) 00:10:31.496 5.262 - 5.286: 98.5826% ( 1) 00:10:31.496 5.428 - 5.452: 98.5974% ( 2) 00:10:31.496 5.713 - 5.736: 98.6048% ( 1) 00:10:31.496 5.926 - 5.950: 98.6122% ( 1) 00:10:31.496 5.973 - 5.997: 98.6197% ( 1) 00:10:31.496 6.116 - 6.163: 98.6271% ( 1) 00:10:31.496 6.400 - 6.447: 98.6345% ( 1) 
00:10:31.496 6.779 - 6.827: 98.6419% ( 1) 00:10:31.496 6.874 - 6.921: 98.6494% ( 1) 00:10:31.496 7.016 - 7.064: 98.6568% ( 1) 00:10:31.496 7.301 - 7.348: 98.6642% ( 1) 00:10:31.496 7.348 - 7.396: 98.6716% ( 1) 00:10:31.496 7.396 - 7.443: 98.6790% ( 1) 00:10:31.496 7.490 - 7.538: 98.6865% ( 1) 00:10:31.496 7.585 - 7.633: 98.6939% ( 1) 00:10:31.496 7.680 - 7.727: 98.7013% ( 1) 00:10:31.496 7.775 - 7.822: 98.7161% ( 2) 00:10:31.496 7.822 - 7.870: 98.7236% ( 1) 00:10:31.496 7.870 - 7.917: 98.7310% ( 1) 00:10:31.496 7.917 - 7.964: 98.7384% ( 1) 00:10:31.496 8.012 - 8.059: 98.7458% ( 1) 00:10:31.496 8.059 - 8.107: 98.7532% ( 1) 00:10:31.496 8.107 - 8.154: 98.7607% ( 1) 00:10:31.496 8.296 - 8.344: 98.7755% ( 2) 00:10:31.496 8.391 - 8.439: 98.7829% ( 1) 00:10:31.496 8.439 - 8.486: 98.7904% ( 1) 00:10:31.496 8.486 - 8.533: 98.7978% ( 1) 00:10:31.496 8.581 - 8.628: 98.8126% ( 2) 00:10:31.496 8.818 - 8.865: 98.8200% ( 1) 00:10:31.496 8.865 - 8.913: 98.8275% ( 1) 00:10:31.496 8.960 - 9.007: 98.8349% ( 1) 00:10:31.496 9.102 - 9.150: 98.8571% ( 3) 00:10:31.496 9.529 - 9.576: 98.8646% ( 1) 00:10:31.496 9.576 - 9.624: 98.8720% ( 1) 00:10:31.496 9.671 - 9.719: 98.8794% ( 1) 00:10:31.496 9.861 - 9.908: 98.8942% ( 2) 00:10:31.496 9.956 - 10.003: 98.9017% ( 1) 00:10:31.496 10.003 - 10.050: 98.9091% ( 1) 00:10:31.496 10.098 - 10.145: 98.9165% ( 1) 00:10:31.496 10.193 - 10.240: 98.9239% ( 1) 00:10:31.496 10.619 - 10.667: 98.9314% ( 1) 00:10:31.496 10.667 - 10.714: 98.9388% ( 1) 00:10:31.496 10.856 - 10.904: 98.9462% ( 1) 00:10:31.496 11.093 - 11.141: 98.9536% ( 1) 00:10:31.496 11.283 - 11.330: 98.9610% ( 1) 00:10:31.496 11.615 - 11.662: 98.9685% ( 1) 00:10:31.496 11.710 - 11.757: 98.9759% ( 1) 00:10:31.496 11.852 - 11.899: 98.9833% ( 1) 00:10:31.496 11.947 - 11.994: 98.9981% ( 2) 00:10:31.496 12.231 - 12.326: 99.0056% ( 1) 00:10:31.496 12.326 - 12.421: 99.0204% ( 2) 00:10:31.496 12.895 - 12.990: 99.0278% ( 1) 00:10:31.496 12.990 - 13.084: 99.0353% ( 1) 00:10:31.496 13.274 - 13.369: 99.0427% ( 1) 00:10:31.496 13.653 - 13.748: 99.0501% ( 1) 00:10:31.496 13.843 - 13.938: 99.0575% ( 1) 00:10:31.496 13.938 - 14.033: 99.0649% ( 1) 00:10:31.496 14.127 - 14.222: 99.0724% ( 1) 00:10:31.496 14.317 - 14.412: 99.0798% ( 1) 00:10:31.496 14.412 - 14.507: 99.0872% ( 1) 00:10:31.496 17.161 - 17.256: 99.0946% ( 1) 00:10:31.496 17.256 - 17.351: 99.1095% ( 2) 00:10:31.496 17.351 - 17.446: 99.1466% ( 5) 00:10:31.496 17.446 - 17.541: 99.1614% ( 2) 00:10:31.496 17.541 - 17.636: 99.1763% ( 2) 00:10:31.496 17.636 - 17.730: 99.2430% ( 9) 00:10:31.496 17.730 - 17.825: 99.2727% ( 4) 00:10:31.496 17.825 - 17.920: 99.3173% ( 6) 00:10:31.496 17.920 - 18.015: 99.3692% ( 7) 00:10:31.496 18.015 - 18.110: 99.4137% ( 6) 00:10:31.496 18.110 - 18.204: 99.4360% ( 3) 00:10:31.496 18.204 - 18.299: 99.5028% ( 9) 00:10:31.496 18.299 - 18.394: 99.5993% ( 13) 00:10:31.496 18.394 - 18.489: 99.6586% ( 8) 00:10:31.496 18.489 - 18.584: 99.6957% ( 5) 00:10:31.496 18.584 - 18.679: 99.7477% ( 7) 00:10:31.496 18.679 - 18.773: 99.7699% ( 3) 00:10:31.496 18.773 - 18.868: 99.7774% ( 1) 00:10:31.496 18.868 - 18.963: 99.8071% ( 4) 00:10:31.496 18.963 - 19.058: 99.8219% ( 2) 00:10:31.496 19.058 - 19.153: 99.8442% ( 3) 00:10:31.496 19.247 - 19.342: 99.8516% ( 1) 00:10:31.496 19.342 - 19.437: 99.8590% ( 1) 00:10:31.496 19.532 - 19.627: 99.8664% ( 1) 00:10:31.496 22.566 - 22.661: 99.8738% ( 1) 00:10:31.496 22.945 - 23.040: 99.8813% ( 1) 00:10:31.496 23.609 - 23.704: 99.8887% ( 1) 00:10:31.496 23.893 - 23.988: 99.8961% ( 1) 00:10:31.496 25.410 - 25.600: 99.9035% ( 1) 
00:10:31.496 25.790 - 25.979: 99.9109% ( 1) 00:10:31.496 2026.761 - 2038.898: 99.9184% ( 1) 00:10:31.496 3980.705 - 4004.978: 99.9926% ( 10) 00:10:31.496 4004.978 - 4029.250: 100.0000% ( 1) 00:10:31.496 00:10:31.496 Complete histogram 00:10:31.496 ================== 00:10:31.496 Range in us Cumulative Count 00:10:31.496 2.062 - 2.074: 9.1948% ( 1239) 00:10:31.496 2.074 - 2.086: 26.4861% ( 2330) 00:10:31.496 2.086 - 2.098: 28.5492% ( 278) 00:10:31.496 2.098 - 2.110: 47.9703% ( 2617) 00:10:31.496 2.110 - 2.121: 57.9592% ( 1346) 00:10:31.496 2.121 - 2.133: 59.3024% ( 181) 00:10:31.496 2.133 - 2.145: 67.1837% ( 1062) 00:10:31.496 2.145 - 2.157: 72.2301% ( 680) 00:10:31.496 2.157 - 2.169: 74.1447% ( 258) 00:10:31.496 2.169 - 2.181: 83.0501% ( 1200) 00:10:31.496 2.181 - 2.193: 86.6419% ( 484) 00:10:31.497 2.193 - 2.204: 87.5473% ( 122) 00:10:31.497 2.204 - 2.216: 89.4471% ( 256) 00:10:31.497 2.216 - 2.228: 90.7384% ( 174) 00:10:31.497 2.228 - 2.240: 91.9109% ( 158) 00:10:31.497 2.240 - 2.252: 93.4545% ( 208) 00:10:31.497 2.252 - 2.264: 94.5158% ( 143) 00:10:31.497 2.264 - 2.276: 95.0575% ( 73) 00:10:31.497 2.276 - 2.287: 95.4286% ( 50) 00:10:31.497 2.287 - 2.299: 95.6957% ( 36) 00:10:31.497 2.299 - 2.311: 95.9555% ( 35) 00:10:31.497 2.311 - 2.323: 96.0816% ( 17) 00:10:31.497 2.323 - 2.335: 96.1558% ( 10) 00:10:31.497 2.335 - 2.347: 96.2449% ( 12) 00:10:31.497 2.347 - 2.359: 96.2597% ( 2) 00:10:31.497 2.359 - 2.370: 96.2968% ( 5) 00:10:31.497 2.370 - 2.382: 96.3859% ( 12) 00:10:31.497 2.382 - 2.394: 96.4898% ( 14) 00:10:31.497 2.394 - 2.406: 96.6605% ( 23) 00:10:31.497 2.406 - 2.418: 96.8237% ( 22) 00:10:31.497 2.418 - 2.430: 97.1206% ( 40) 00:10:31.497 2.430 - 2.441: 97.4100% ( 39) 00:10:31.497 2.441 - 2.453: 97.5881% ( 24) 00:10:31.497 2.453 - 2.465: 97.7737% ( 25) 00:10:31.497 2.465 - 2.477: 98.0111% ( 32) 00:10:31.497 2.477 - 2.489: 98.2115% ( 27) 00:10:31.497 2.489 - 2.501: 98.2857% ( 10) 00:10:31.497 2.501 - 2.513: 98.3673% ( 11) 00:10:31.497 2.513 - 2.524: 98.4267% ( 8) 00:10:31.497 2.524 - 2.536: 98.4564% ( 4) 00:10:31.497 2.536 - 2.548: 98.4638% ( 1) 00:10:31.497 2.560 - 2.572: 98.4861% ( 3) 00:10:31.497 2.596 - 2.607: 98.4935% ( 1) 00:10:31.497 2.631 - 2.643: 98.5009% ( 1) 00:10:31.497 2.667 - 2.679: 98.5083% ( 1) 00:10:31.497 2.679 - 2.690: 98.5158% ( 1) 00:10:31.497 2.690 - 2.702: 98.5306% ( 2) 00:10:31.497 2.702 - 2.714: 98.5380% ( 1) 00:10:31.497 2.761 - 2.773: 98.5455% ( 1) 00:10:31.497 3.176 - 3.200: 98.5529% ( 1) 00:10:31.497 3.247 - 3.271: 98.5603% ( 1) 00:10:31.497 3.295 - 3.319: 98.5677% ( 1) 00:10:31.497 3.319 - 3.342: 98.5751% ( 1) 00:10:31.497 3.342 - 3.366: 98.5900% ( 2) 00:10:31.497 3.366 - 3.390: 98.6122% ( 3) 00:10:31.497 3.390 - 3.413: 98.6271% ( 2) 00:10:31.497 3.413 - 3.437: 98.6345% ( 1) 00:10:31.497 3.437 - 3.461: 98.6494% ( 2) 00:10:31.497 3.461 - 3.484: 98.6790% ( 4) 00:10:31.497 3.484 - 3.508: 98.6865% ( 1) 00:10:31.497 3.508 - 3.532: 98.7087% ( 3) 00:10:31.497 3.532 - 3.556: 98.7161% ( 1) 00:10:31.497 3.556 - 3.579: 98.7310% ( 2) 00:10:31.497 3.579 - 3.603: 98.7458% ( 2) 00:10:31.497 3.603 - 3.627: 98.7681% ( 3) 00:10:31.497 3.627 - 3.650: 98.7755% ( 1) 00:10:31.497 3.674 - 3.698: 98.7829% ( 1) 00:10:31.497 3.698 - 3.721: 98.7904% ( 1) 00:10:31.497 3.721 - 3.745: 98.7978% ( 1) 00:10:31.497 3.745 - 3.769: 98.8052% ( 1) 00:10:31.497 3.769 - 3.793: 98.8126% ( 1) 00:10:31.497 3.864 - 3.887: 98.8200% ( 1) 00:10:31.497 3.887 - 3.911: 98.8275% ( 1) 00:10:31.497 3.959 - 3.982: 98.8349% ( 1) 00:10:31.497 4.101 - 4.124: 98.8423% ( 1) 00:10:31.497 5.357 - 5.381: 98.8497% 
( 1) 00:10:31.497 5.452 - 5.476: 98.8571% ( 1) 00:10:31.497 5.523 - 5.547: 98.8646% ( 1) 00:10:31.497 5.926 - 5.950: 98.8720% ( 1) 00:10:31.497 5.950 - 5.973: 98.8794% ( 1) 00:10:31.497 6.068 - 6.116: 98.8868% ( 1) 00:10:31.497 6.258 - 6.305: 98.8942% ( 1) 00:10:31.497 6.305 - 6.353: 98.9017% ( 1) 00:10:31.497 6.353 - 6.400: 98.9091% ( 1) 00:10:31.497 6.447 - 6.495: 98.9165% ( 1) 00:10:31.497 6.495 - 6.542: 98.9239% ( 1) 00:10:31.497 6.874 - 6.921: 98.9388% ( 2) 00:10:31.497 6.969 - 7.016: 98.9462% ( 1) 00:10:31.497 7.064 - 7.111: 98.9610% ( 2) [2024-07-16 00:47:06.166386] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:31.497 00:10:31.497 7.159 - 7.206: 98.9759% ( 2) 00:10:31.497 7.396 - 7.443: 98.9833% ( 1) 00:10:31.497 7.490 - 7.538: 98.9907% ( 1) 00:10:31.497 7.964 - 8.012: 98.9981% ( 1) 00:10:31.497 8.107 - 8.154: 99.0056% ( 1) 00:10:31.497 9.197 - 9.244: 99.0130% ( 1) 00:10:31.497 9.719 - 9.766: 99.0204% ( 1) 00:10:31.497 15.550 - 15.644: 99.0353% ( 2) 00:10:31.497 15.739 - 15.834: 99.0798% ( 6) 00:10:31.497 15.834 - 15.929: 99.1095% ( 4) 00:10:31.497 15.929 - 16.024: 99.1169% ( 1) 00:10:31.497 16.024 - 16.119: 99.1540% ( 5) 00:10:31.497 16.119 - 16.213: 99.1763% ( 3) 00:10:31.497 16.213 - 16.308: 99.1985% ( 3) 00:10:31.497 16.308 - 16.403: 99.2282% ( 4) 00:10:31.497 16.403 - 16.498: 99.2727% ( 6) 00:10:31.497 16.498 - 16.593: 99.2801% ( 1) 00:10:31.497 16.593 - 16.687: 99.2950% ( 2) 00:10:31.497 16.687 - 16.782: 99.3247% ( 4) 00:10:31.497 16.782 - 16.877: 99.3321% ( 1) 00:10:31.497 16.877 - 16.972: 99.3395% ( 1) 00:10:31.497 16.972 - 17.067: 99.3544% ( 2) 00:10:31.497 17.067 - 17.161: 99.3840% ( 4) 00:10:31.497 17.161 - 17.256: 99.3989% ( 2) 00:10:31.497 17.256 - 17.351: 99.4212% ( 3) 00:10:31.497 17.351 - 17.446: 99.4286% ( 1) 00:10:31.497 17.541 - 17.636: 99.4360% ( 1) 00:10:31.497 2014.625 - 2026.761: 99.4434% ( 1) 00:10:31.497 2099.579 - 2111.716: 99.4508% ( 1) 00:10:31.497 2997.665 - 3009.801: 99.4583% ( 1) 00:10:31.497 3980.705 - 4004.978: 99.9332% ( 64) 00:10:31.497 4004.978 - 4029.250: 99.9703% ( 5) 00:10:31.497 4975.881 - 5000.154: 99.9852% ( 2) 00:10:31.497 5000.154 - 5024.427: 99.9926% ( 1) 00:10:31.497 5971.058 - 5995.330: 100.0000% ( 1) 00:10:31.497 00:10:31.497 00:47:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:31.497 00:47:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:31.497 00:47:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:31.497 00:47:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:10:31.497 00:47:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:31.755 [ 00:10:31.755 { 00:10:31.755 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:31.755 "subtype": "Discovery", 00:10:31.755 "listen_addresses": [], 00:10:31.755 "allow_any_host": true, 00:10:31.755 "hosts": [] 00:10:31.755 }, 00:10:31.755 { 00:10:31.755 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:31.755 "subtype": "NVMe", 00:10:31.755 "listen_addresses": [ 00:10:31.755 { 00:10:31.755 "trtype": "VFIOUSER", 00:10:31.755 "adrfam": "IPv4", 00:10:31.755 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:31.755 "trsvcid": "0" 00:10:31.755 } 00:10:31.755 ], 00:10:31.755 
"allow_any_host": true, 00:10:31.755 "hosts": [], 00:10:31.755 "serial_number": "SPDK1", 00:10:31.755 "model_number": "SPDK bdev Controller", 00:10:31.755 "max_namespaces": 32, 00:10:31.755 "min_cntlid": 1, 00:10:31.755 "max_cntlid": 65519, 00:10:31.755 "namespaces": [ 00:10:31.755 { 00:10:31.755 "nsid": 1, 00:10:31.755 "bdev_name": "Malloc1", 00:10:31.755 "name": "Malloc1", 00:10:31.755 "nguid": "8A2A5C211F4347699D7EFF3B0EC5D315", 00:10:31.755 "uuid": "8a2a5c21-1f43-4769-9d7e-ff3b0ec5d315" 00:10:31.755 } 00:10:31.755 ] 00:10:31.755 }, 00:10:31.755 { 00:10:31.755 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:31.755 "subtype": "NVMe", 00:10:31.755 "listen_addresses": [ 00:10:31.755 { 00:10:31.755 "trtype": "VFIOUSER", 00:10:31.755 "adrfam": "IPv4", 00:10:31.755 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:31.755 "trsvcid": "0" 00:10:31.755 } 00:10:31.755 ], 00:10:31.755 "allow_any_host": true, 00:10:31.755 "hosts": [], 00:10:31.755 "serial_number": "SPDK2", 00:10:31.755 "model_number": "SPDK bdev Controller", 00:10:31.755 "max_namespaces": 32, 00:10:31.755 "min_cntlid": 1, 00:10:31.755 "max_cntlid": 65519, 00:10:31.755 "namespaces": [ 00:10:31.755 { 00:10:31.755 "nsid": 1, 00:10:31.755 "bdev_name": "Malloc2", 00:10:31.755 "name": "Malloc2", 00:10:31.755 "nguid": "881FC6DED65949798B5C020943F01219", 00:10:31.755 "uuid": "881fc6de-d659-4979-8b5c-020943f01219" 00:10:31.755 } 00:10:31.755 ] 00:10:31.755 } 00:10:31.755 ] 00:10:32.013 00:47:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:32.013 00:47:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2588679 00:10:32.013 00:47:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:32.013 00:47:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:32.013 00:47:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:32.013 00:47:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:32.013 00:47:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:32.013 00:47:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:32.013 00:47:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:32.013 00:47:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:32.013 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.013 [2024-07-16 00:47:06.669381] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:32.271 Malloc3 00:10:32.271 00:47:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:32.529 [2024-07-16 00:47:07.040051] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:32.529 00:47:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:32.529 Asynchronous Event Request test 00:10:32.529 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:32.529 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:32.529 Registering asynchronous event callbacks... 00:10:32.529 Starting namespace attribute notice tests for all controllers... 00:10:32.529 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:32.529 aer_cb - Changed Namespace 00:10:32.529 Cleaning up... 00:10:32.790 [ 00:10:32.790 { 00:10:32.790 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:32.790 "subtype": "Discovery", 00:10:32.790 "listen_addresses": [], 00:10:32.790 "allow_any_host": true, 00:10:32.790 "hosts": [] 00:10:32.790 }, 00:10:32.790 { 00:10:32.790 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:32.790 "subtype": "NVMe", 00:10:32.790 "listen_addresses": [ 00:10:32.790 { 00:10:32.790 "trtype": "VFIOUSER", 00:10:32.790 "adrfam": "IPv4", 00:10:32.790 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:32.790 "trsvcid": "0" 00:10:32.790 } 00:10:32.790 ], 00:10:32.790 "allow_any_host": true, 00:10:32.790 "hosts": [], 00:10:32.791 "serial_number": "SPDK1", 00:10:32.791 "model_number": "SPDK bdev Controller", 00:10:32.791 "max_namespaces": 32, 00:10:32.791 "min_cntlid": 1, 00:10:32.791 "max_cntlid": 65519, 00:10:32.791 "namespaces": [ 00:10:32.791 { 00:10:32.791 "nsid": 1, 00:10:32.791 "bdev_name": "Malloc1", 00:10:32.791 "name": "Malloc1", 00:10:32.791 "nguid": "8A2A5C211F4347699D7EFF3B0EC5D315", 00:10:32.791 "uuid": "8a2a5c21-1f43-4769-9d7e-ff3b0ec5d315" 00:10:32.791 }, 00:10:32.791 { 00:10:32.791 "nsid": 2, 00:10:32.791 "bdev_name": "Malloc3", 00:10:32.791 "name": "Malloc3", 00:10:32.791 "nguid": "F3EE656440B24B64A52FF832F94A06C3", 00:10:32.791 "uuid": "f3ee6564-40b2-4b64-a52f-f832f94a06c3" 00:10:32.791 } 00:10:32.791 ] 00:10:32.791 }, 00:10:32.791 { 00:10:32.791 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:32.791 "subtype": "NVMe", 00:10:32.791 "listen_addresses": [ 00:10:32.791 { 00:10:32.791 "trtype": "VFIOUSER", 00:10:32.791 "adrfam": "IPv4", 00:10:32.791 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:32.791 "trsvcid": "0" 00:10:32.791 } 00:10:32.791 ], 00:10:32.791 "allow_any_host": true, 00:10:32.791 "hosts": [], 00:10:32.791 "serial_number": "SPDK2", 00:10:32.791 "model_number": "SPDK bdev Controller", 00:10:32.791 
"max_namespaces": 32, 00:10:32.791 "min_cntlid": 1, 00:10:32.791 "max_cntlid": 65519, 00:10:32.791 "namespaces": [ 00:10:32.791 { 00:10:32.791 "nsid": 1, 00:10:32.791 "bdev_name": "Malloc2", 00:10:32.791 "name": "Malloc2", 00:10:32.791 "nguid": "881FC6DED65949798B5C020943F01219", 00:10:32.791 "uuid": "881fc6de-d659-4979-8b5c-020943f01219" 00:10:32.791 } 00:10:32.791 ] 00:10:32.791 } 00:10:32.791 ] 00:10:32.791 00:47:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2588679 00:10:32.791 00:47:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:32.791 00:47:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:32.791 00:47:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:32.791 00:47:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:32.791 [2024-07-16 00:47:07.317121] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:10:32.791 [2024-07-16 00:47:07.317157] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2588810 ] 00:10:32.791 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.791 [2024-07-16 00:47:07.350830] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:32.791 [2024-07-16 00:47:07.359194] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:32.791 [2024-07-16 00:47:07.359222] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f01d78d2000 00:10:32.791 [2024-07-16 00:47:07.360196] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:32.791 [2024-07-16 00:47:07.361201] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:32.791 [2024-07-16 00:47:07.362208] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:32.791 [2024-07-16 00:47:07.363212] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:32.791 [2024-07-16 00:47:07.364218] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:32.791 [2024-07-16 00:47:07.365226] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:32.791 [2024-07-16 00:47:07.366232] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:32.791 [2024-07-16 00:47:07.367239] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:32.791 [2024-07-16 00:47:07.368247] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:32.791 [2024-07-16 00:47:07.368268] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f01d78c7000 00:10:32.791 [2024-07-16 00:47:07.369422] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:32.791 [2024-07-16 00:47:07.386110] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:32.791 [2024-07-16 00:47:07.386146] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:32.791 [2024-07-16 00:47:07.388260] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:32.791 [2024-07-16 00:47:07.388314] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:32.791 [2024-07-16 00:47:07.388405] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:10:32.791 [2024-07-16 00:47:07.388428] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:32.791 [2024-07-16 00:47:07.388439] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:32.791 [2024-07-16 00:47:07.389264] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:32.791 [2024-07-16 00:47:07.389289] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:32.791 [2024-07-16 00:47:07.389303] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:32.791 [2024-07-16 00:47:07.390269] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:32.791 [2024-07-16 00:47:07.390289] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:32.791 [2024-07-16 00:47:07.390303] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:32.791 [2024-07-16 00:47:07.391276] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:32.791 [2024-07-16 00:47:07.391297] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:32.791 [2024-07-16 00:47:07.392284] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:32.791 [2024-07-16 00:47:07.392304] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:32.791 [2024-07-16 00:47:07.392313] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:32.791 [2024-07-16 00:47:07.392325] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:32.792 [2024-07-16 00:47:07.392434] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:32.792 [2024-07-16 00:47:07.392442] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:32.792 [2024-07-16 00:47:07.392450] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:32.792 [2024-07-16 00:47:07.393308] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:32.792 [2024-07-16 00:47:07.394302] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:32.792 [2024-07-16 00:47:07.395307] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:32.792 [2024-07-16 00:47:07.396302] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:32.792 [2024-07-16 00:47:07.396377] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:32.792 [2024-07-16 00:47:07.397323] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:32.792 [2024-07-16 00:47:07.397343] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:32.792 [2024-07-16 00:47:07.397352] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:32.792 [2024-07-16 00:47:07.397375] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:32.792 [2024-07-16 00:47:07.397391] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:32.792 [2024-07-16 00:47:07.397417] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:32.792 [2024-07-16 00:47:07.397427] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:32.792 [2024-07-16 00:47:07.397446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:32.792 [2024-07-16 00:47:07.403891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:32.792 [2024-07-16 00:47:07.403914] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:32.792 [2024-07-16 00:47:07.403923] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:32.792 [2024-07-16 00:47:07.403931] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:32.792 [2024-07-16 00:47:07.403939] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:32.792 [2024-07-16 00:47:07.403946] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:32.792 [2024-07-16 00:47:07.403955] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:32.792 [2024-07-16 00:47:07.403962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:32.792 [2024-07-16 00:47:07.403976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:32.792 [2024-07-16 00:47:07.403996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:32.792 [2024-07-16 00:47:07.411892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:32.792 [2024-07-16 00:47:07.411914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:32.792 [2024-07-16 00:47:07.411927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:32.792 [2024-07-16 00:47:07.411939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:32.792 [2024-07-16 00:47:07.411951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:32.792 [2024-07-16 00:47:07.411959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:32.792 [2024-07-16 00:47:07.411975] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:32.792 [2024-07-16 00:47:07.411991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:32.792 [2024-07-16 00:47:07.419902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:32.792 [2024-07-16 00:47:07.419920] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:32.792 [2024-07-16 00:47:07.419930] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:32.792 [2024-07-16 00:47:07.419945] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:32.792 [2024-07-16 00:47:07.419959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:32.792 [2024-07-16 00:47:07.419974] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:32.792 [2024-07-16 00:47:07.427889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:32.792 [2024-07-16 00:47:07.427964] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:32.792 [2024-07-16 00:47:07.427981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:32.792 [2024-07-16 00:47:07.427994] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:32.792 [2024-07-16 00:47:07.428003] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:32.792 [2024-07-16 00:47:07.428012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:32.792 [2024-07-16 00:47:07.435887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:32.792 [2024-07-16 00:47:07.435909] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:32.792 [2024-07-16 00:47:07.435927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:32.792 [2024-07-16 00:47:07.435942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:32.792 [2024-07-16 00:47:07.435954] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:32.792 [2024-07-16 00:47:07.435963] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:32.792 [2024-07-16 00:47:07.435973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:32.793 [2024-07-16 00:47:07.443887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:32.793 [2024-07-16 00:47:07.443917] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:32.793 [2024-07-16 00:47:07.443933] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:32.793 [2024-07-16 00:47:07.443947] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:32.793 [2024-07-16 00:47:07.443956] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:32.793 [2024-07-16 00:47:07.443966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:32.793 [2024-07-16 00:47:07.451888] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:32.793 [2024-07-16 00:47:07.451909] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:32.793 [2024-07-16 00:47:07.451922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:32.793 [2024-07-16 00:47:07.451936] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:32.793 [2024-07-16 00:47:07.451947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:10:32.793 [2024-07-16 00:47:07.451959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:32.793 [2024-07-16 00:47:07.451968] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:32.793 [2024-07-16 00:47:07.451977] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:10:32.793 [2024-07-16 00:47:07.451984] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:32.793 [2024-07-16 00:47:07.451992] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:32.793 [2024-07-16 00:47:07.452018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:32.793 [2024-07-16 00:47:07.459888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:32.793 [2024-07-16 00:47:07.459915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:32.793 [2024-07-16 00:47:07.467890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:32.793 [2024-07-16 00:47:07.467917] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:32.793 [2024-07-16 00:47:07.475897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:32.793 [2024-07-16 00:47:07.475923] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:32.793 [2024-07-16 00:47:07.483890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:32.793 [2024-07-16 00:47:07.483922] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:32.793 [2024-07-16 00:47:07.483933] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:32.793 [2024-07-16 00:47:07.483939] nvme_pcie_common.c:1240:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:10:32.793 [2024-07-16 00:47:07.483945] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:32.793 [2024-07-16 00:47:07.483955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:32.793 [2024-07-16 00:47:07.483967] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:32.793 [2024-07-16 00:47:07.483975] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:32.793 [2024-07-16 00:47:07.483985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:32.793 [2024-07-16 00:47:07.483996] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:32.793 [2024-07-16 00:47:07.484004] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:32.793 [2024-07-16 00:47:07.484013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:32.793 [2024-07-16 00:47:07.484025] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:32.793 [2024-07-16 00:47:07.484033] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:32.793 [2024-07-16 00:47:07.484042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:32.793 [2024-07-16 00:47:07.491901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:32.793 [2024-07-16 00:47:07.491928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:32.793 [2024-07-16 00:47:07.491946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:32.793 [2024-07-16 00:47:07.491958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:32.793 ===================================================== 00:10:32.793 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:32.793 ===================================================== 00:10:32.793 Controller Capabilities/Features 00:10:32.793 ================================ 00:10:32.793 Vendor ID: 4e58 00:10:32.793 Subsystem Vendor ID: 4e58 00:10:32.793 Serial Number: SPDK2 00:10:32.793 Model Number: SPDK bdev Controller 00:10:32.793 Firmware Version: 24.09 00:10:32.793 Recommended Arb Burst: 6 00:10:32.793 IEEE OUI Identifier: 8d 6b 50 00:10:32.793 Multi-path I/O 00:10:32.793 May have multiple subsystem ports: Yes 00:10:32.793 May have multiple controllers: Yes 00:10:32.793 Associated with SR-IOV VF: No 00:10:32.793 Max Data Transfer Size: 131072 00:10:32.793 Max Number of Namespaces: 32 00:10:32.793 Max Number of I/O Queues: 127 00:10:32.793 NVMe Specification Version (VS): 1.3 00:10:32.793 NVMe Specification Version (Identify): 1.3 00:10:32.793 Maximum Queue Entries: 256 00:10:32.793 Contiguous Queues Required: Yes 00:10:32.793 Arbitration Mechanisms 
Supported 00:10:32.793 Weighted Round Robin: Not Supported 00:10:32.793 Vendor Specific: Not Supported 00:10:32.793 Reset Timeout: 15000 ms 00:10:32.793 Doorbell Stride: 4 bytes 00:10:32.793 NVM Subsystem Reset: Not Supported 00:10:32.793 Command Sets Supported 00:10:32.793 NVM Command Set: Supported 00:10:32.793 Boot Partition: Not Supported 00:10:32.793 Memory Page Size Minimum: 4096 bytes 00:10:32.793 Memory Page Size Maximum: 4096 bytes 00:10:32.793 Persistent Memory Region: Not Supported 00:10:32.793 Optional Asynchronous Events Supported 00:10:32.793 Namespace Attribute Notices: Supported 00:10:32.793 Firmware Activation Notices: Not Supported 00:10:32.793 ANA Change Notices: Not Supported 00:10:32.793 PLE Aggregate Log Change Notices: Not Supported 00:10:32.793 LBA Status Info Alert Notices: Not Supported 00:10:32.793 EGE Aggregate Log Change Notices: Not Supported 00:10:32.793 Normal NVM Subsystem Shutdown event: Not Supported 00:10:32.793 Zone Descriptor Change Notices: Not Supported 00:10:32.793 Discovery Log Change Notices: Not Supported 00:10:32.793 Controller Attributes 00:10:32.794 128-bit Host Identifier: Supported 00:10:32.794 Non-Operational Permissive Mode: Not Supported 00:10:32.794 NVM Sets: Not Supported 00:10:32.794 Read Recovery Levels: Not Supported 00:10:32.794 Endurance Groups: Not Supported 00:10:32.794 Predictable Latency Mode: Not Supported 00:10:32.794 Traffic Based Keep ALive: Not Supported 00:10:32.794 Namespace Granularity: Not Supported 00:10:32.794 SQ Associations: Not Supported 00:10:32.794 UUID List: Not Supported 00:10:32.794 Multi-Domain Subsystem: Not Supported 00:10:32.794 Fixed Capacity Management: Not Supported 00:10:32.794 Variable Capacity Management: Not Supported 00:10:32.794 Delete Endurance Group: Not Supported 00:10:32.794 Delete NVM Set: Not Supported 00:10:32.794 Extended LBA Formats Supported: Not Supported 00:10:32.794 Flexible Data Placement Supported: Not Supported 00:10:32.794 00:10:32.794 Controller Memory Buffer Support 00:10:32.794 ================================ 00:10:32.794 Supported: No 00:10:32.794 00:10:32.794 Persistent Memory Region Support 00:10:32.794 ================================ 00:10:32.794 Supported: No 00:10:32.794 00:10:32.794 Admin Command Set Attributes 00:10:32.794 ============================ 00:10:32.794 Security Send/Receive: Not Supported 00:10:32.794 Format NVM: Not Supported 00:10:32.794 Firmware Activate/Download: Not Supported 00:10:32.794 Namespace Management: Not Supported 00:10:32.794 Device Self-Test: Not Supported 00:10:32.794 Directives: Not Supported 00:10:32.794 NVMe-MI: Not Supported 00:10:32.794 Virtualization Management: Not Supported 00:10:32.794 Doorbell Buffer Config: Not Supported 00:10:32.794 Get LBA Status Capability: Not Supported 00:10:32.794 Command & Feature Lockdown Capability: Not Supported 00:10:32.794 Abort Command Limit: 4 00:10:32.794 Async Event Request Limit: 4 00:10:32.794 Number of Firmware Slots: N/A 00:10:32.794 Firmware Slot 1 Read-Only: N/A 00:10:32.794 Firmware Activation Without Reset: N/A 00:10:32.794 Multiple Update Detection Support: N/A 00:10:32.794 Firmware Update Granularity: No Information Provided 00:10:32.794 Per-Namespace SMART Log: No 00:10:32.794 Asymmetric Namespace Access Log Page: Not Supported 00:10:32.794 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:32.794 Command Effects Log Page: Supported 00:10:32.794 Get Log Page Extended Data: Supported 00:10:32.794 Telemetry Log Pages: Not Supported 00:10:32.794 Persistent Event Log Pages: Not Supported 
00:10:32.794 Supported Log Pages Log Page: May Support 00:10:32.794 Commands Supported & Effects Log Page: Not Supported 00:10:32.794 Feature Identifiers & Effects Log Page:May Support 00:10:32.794 NVMe-MI Commands & Effects Log Page: May Support 00:10:32.794 Data Area 4 for Telemetry Log: Not Supported 00:10:32.794 Error Log Page Entries Supported: 128 00:10:32.794 Keep Alive: Supported 00:10:32.794 Keep Alive Granularity: 10000 ms 00:10:32.794 00:10:32.794 NVM Command Set Attributes 00:10:32.794 ========================== 00:10:32.794 Submission Queue Entry Size 00:10:32.794 Max: 64 00:10:32.794 Min: 64 00:10:32.794 Completion Queue Entry Size 00:10:32.794 Max: 16 00:10:32.794 Min: 16 00:10:32.794 Number of Namespaces: 32 00:10:32.794 Compare Command: Supported 00:10:32.794 Write Uncorrectable Command: Not Supported 00:10:32.794 Dataset Management Command: Supported 00:10:32.794 Write Zeroes Command: Supported 00:10:32.794 Set Features Save Field: Not Supported 00:10:32.794 Reservations: Not Supported 00:10:32.794 Timestamp: Not Supported 00:10:32.794 Copy: Supported 00:10:32.794 Volatile Write Cache: Present 00:10:32.794 Atomic Write Unit (Normal): 1 00:10:32.794 Atomic Write Unit (PFail): 1 00:10:32.794 Atomic Compare & Write Unit: 1 00:10:32.794 Fused Compare & Write: Supported 00:10:32.794 Scatter-Gather List 00:10:32.794 SGL Command Set: Supported (Dword aligned) 00:10:32.794 SGL Keyed: Not Supported 00:10:32.794 SGL Bit Bucket Descriptor: Not Supported 00:10:32.794 SGL Metadata Pointer: Not Supported 00:10:32.794 Oversized SGL: Not Supported 00:10:32.794 SGL Metadata Address: Not Supported 00:10:32.794 SGL Offset: Not Supported 00:10:32.794 Transport SGL Data Block: Not Supported 00:10:32.794 Replay Protected Memory Block: Not Supported 00:10:32.794 00:10:32.794 Firmware Slot Information 00:10:32.794 ========================= 00:10:32.794 Active slot: 1 00:10:32.794 Slot 1 Firmware Revision: 24.09 00:10:32.794 00:10:32.794 00:10:32.794 Commands Supported and Effects 00:10:32.794 ============================== 00:10:32.794 Admin Commands 00:10:32.794 -------------- 00:10:32.794 Get Log Page (02h): Supported 00:10:32.794 Identify (06h): Supported 00:10:32.794 Abort (08h): Supported 00:10:32.794 Set Features (09h): Supported 00:10:32.794 Get Features (0Ah): Supported 00:10:32.794 Asynchronous Event Request (0Ch): Supported 00:10:32.794 Keep Alive (18h): Supported 00:10:32.794 I/O Commands 00:10:32.794 ------------ 00:10:32.794 Flush (00h): Supported LBA-Change 00:10:32.794 Write (01h): Supported LBA-Change 00:10:32.794 Read (02h): Supported 00:10:32.794 Compare (05h): Supported 00:10:32.794 Write Zeroes (08h): Supported LBA-Change 00:10:32.794 Dataset Management (09h): Supported LBA-Change 00:10:32.795 Copy (19h): Supported LBA-Change 00:10:32.795 00:10:32.795 Error Log 00:10:32.795 ========= 00:10:32.795 00:10:32.795 Arbitration 00:10:32.795 =========== 00:10:32.795 Arbitration Burst: 1 00:10:32.795 00:10:32.795 Power Management 00:10:32.795 ================ 00:10:32.795 Number of Power States: 1 00:10:32.795 Current Power State: Power State #0 00:10:32.795 Power State #0: 00:10:32.795 Max Power: 0.00 W 00:10:32.795 Non-Operational State: Operational 00:10:32.795 Entry Latency: Not Reported 00:10:32.795 Exit Latency: Not Reported 00:10:32.795 Relative Read Throughput: 0 00:10:32.795 Relative Read Latency: 0 00:10:32.795 Relative Write Throughput: 0 00:10:32.795 Relative Write Latency: 0 00:10:32.795 Idle Power: Not Reported 00:10:32.795 Active Power: Not Reported 00:10:32.795 
Non-Operational Permissive Mode: Not Supported 00:10:32.795 00:10:32.795 Health Information 00:10:32.795 ================== 00:10:32.795 Critical Warnings: 00:10:32.795 Available Spare Space: OK 00:10:32.795 Temperature: OK 00:10:32.795 Device Reliability: OK 00:10:32.795 Read Only: No 00:10:32.795 Volatile Memory Backup: OK 00:10:32.795 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:32.795 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:32.795 Available Spare: 0% 00:10:32.795 Available Sp[2024-07-16 00:47:07.492078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:32.795 [2024-07-16 00:47:07.499892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:32.795 [2024-07-16 00:47:07.499958] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:32.795 [2024-07-16 00:47:07.499976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:32.795 [2024-07-16 00:47:07.499987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:32.795 [2024-07-16 00:47:07.499997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:32.795 [2024-07-16 00:47:07.500007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:32.795 [2024-07-16 00:47:07.500072] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:32.795 [2024-07-16 00:47:07.500093] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:32.795 [2024-07-16 00:47:07.501072] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:32.795 [2024-07-16 00:47:07.503899] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:32.795 [2024-07-16 00:47:07.503914] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:32.795 [2024-07-16 00:47:07.504090] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:32.795 [2024-07-16 00:47:07.504113] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:32.795 [2024-07-16 00:47:07.504165] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:32.795 [2024-07-16 00:47:07.505363] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:33.053 are Threshold: 0% 00:10:33.053 Life Percentage Used: 0% 00:10:33.053 Data Units Read: 0 00:10:33.053 Data Units Written: 0 00:10:33.053 Host Read Commands: 0 00:10:33.053 Host Write Commands: 0 00:10:33.053 Controller Busy Time: 0 minutes 00:10:33.053 Power Cycles: 0 00:10:33.053 Power On Hours: 0 hours 00:10:33.053 Unsafe Shutdowns: 0 00:10:33.053 Unrecoverable Media 
Errors: 0 00:10:33.053 Lifetime Error Log Entries: 0 00:10:33.053 Warning Temperature Time: 0 minutes 00:10:33.053 Critical Temperature Time: 0 minutes 00:10:33.053 00:10:33.053 Number of Queues 00:10:33.053 ================ 00:10:33.053 Number of I/O Submission Queues: 127 00:10:33.053 Number of I/O Completion Queues: 127 00:10:33.053 00:10:33.053 Active Namespaces 00:10:33.053 ================= 00:10:33.053 Namespace ID:1 00:10:33.053 Error Recovery Timeout: Unlimited 00:10:33.053 Command Set Identifier: NVM (00h) 00:10:33.053 Deallocate: Supported 00:10:33.053 Deallocated/Unwritten Error: Not Supported 00:10:33.053 Deallocated Read Value: Unknown 00:10:33.053 Deallocate in Write Zeroes: Not Supported 00:10:33.053 Deallocated Guard Field: 0xFFFF 00:10:33.053 Flush: Supported 00:10:33.053 Reservation: Supported 00:10:33.053 Namespace Sharing Capabilities: Multiple Controllers 00:10:33.053 Size (in LBAs): 131072 (0GiB) 00:10:33.053 Capacity (in LBAs): 131072 (0GiB) 00:10:33.053 Utilization (in LBAs): 131072 (0GiB) 00:10:33.053 NGUID: 881FC6DED65949798B5C020943F01219 00:10:33.053 UUID: 881fc6de-d659-4979-8b5c-020943f01219 00:10:33.053 Thin Provisioning: Not Supported 00:10:33.053 Per-NS Atomic Units: Yes 00:10:33.053 Atomic Boundary Size (Normal): 0 00:10:33.053 Atomic Boundary Size (PFail): 0 00:10:33.053 Atomic Boundary Offset: 0 00:10:33.053 Maximum Single Source Range Length: 65535 00:10:33.053 Maximum Copy Length: 65535 00:10:33.053 Maximum Source Range Count: 1 00:10:33.053 NGUID/EUI64 Never Reused: No 00:10:33.053 Namespace Write Protected: No 00:10:33.053 Number of LBA Formats: 1 00:10:33.053 Current LBA Format: LBA Format #00 00:10:33.053 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:33.053 00:10:33.053 00:47:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:33.053 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.053 [2024-07-16 00:47:07.734750] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:38.318 Initializing NVMe Controllers 00:10:38.318 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:38.318 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:38.318 Initialization complete. Launching workers. 
00:10:38.318 ======================================================== 00:10:38.318 Latency(us) 00:10:38.318 Device Information : IOPS MiB/s Average min max 00:10:38.318 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34493.29 134.74 3710.27 1175.76 8342.23 00:10:38.318 ======================================================== 00:10:38.318 Total : 34493.29 134.74 3710.27 1175.76 8342.23 00:10:38.318 00:10:38.318 [2024-07-16 00:47:12.843230] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:38.318 00:47:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:38.318 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.578 [2024-07-16 00:47:13.091936] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:43.900 Initializing NVMe Controllers 00:10:43.900 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:43.900 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:43.900 Initialization complete. Launching workers. 00:10:43.900 ======================================================== 00:10:43.900 Latency(us) 00:10:43.900 Device Information : IOPS MiB/s Average min max 00:10:43.901 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32239.40 125.94 3970.79 1215.55 9018.44 00:10:43.901 ======================================================== 00:10:43.901 Total : 32239.40 125.94 3970.79 1215.55 9018.44 00:10:43.901 00:10:43.901 [2024-07-16 00:47:18.118761] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:43.901 00:47:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:43.901 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.901 [2024-07-16 00:47:18.315532] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:49.205 [2024-07-16 00:47:23.449022] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:49.206 Initializing NVMe Controllers 00:10:49.206 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:49.206 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:49.206 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:49.206 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:49.206 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:49.206 Initialization complete. Launching workers. 
00:10:49.206 Starting thread on core 2 00:10:49.206 Starting thread on core 3 00:10:49.206 Starting thread on core 1 00:10:49.206 00:47:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:49.206 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.206 [2024-07-16 00:47:23.756381] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:52.500 [2024-07-16 00:47:26.815963] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:52.500 Initializing NVMe Controllers 00:10:52.500 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:52.500 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:52.500 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:52.500 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:52.500 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:52.500 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:52.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:52.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:52.500 Initialization complete. Launching workers. 00:10:52.500 Starting thread on core 1 with urgent priority queue 00:10:52.500 Starting thread on core 2 with urgent priority queue 00:10:52.500 Starting thread on core 3 with urgent priority queue 00:10:52.500 Starting thread on core 0 with urgent priority queue 00:10:52.500 SPDK bdev Controller (SPDK2 ) core 0: 5535.33 IO/s 18.07 secs/100000 ios 00:10:52.500 SPDK bdev Controller (SPDK2 ) core 1: 4852.33 IO/s 20.61 secs/100000 ios 00:10:52.500 SPDK bdev Controller (SPDK2 ) core 2: 5235.33 IO/s 19.10 secs/100000 ios 00:10:52.500 SPDK bdev Controller (SPDK2 ) core 3: 5124.67 IO/s 19.51 secs/100000 ios 00:10:52.500 ======================================================== 00:10:52.500 00:10:52.500 00:47:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:52.500 EAL: No free 2048 kB hugepages reported on node 1 00:10:52.500 [2024-07-16 00:47:27.122351] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:52.500 Initializing NVMe Controllers 00:10:52.500 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:52.500 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:52.500 Namespace ID: 1 size: 0GB 00:10:52.500 Initialization complete. 00:10:52.500 INFO: using host memory buffer for IO 00:10:52.500 Hello world! 
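For reference, the spdk_nvme_perf, reconnect, arbitration and hello_world runs above all reach the same vfio-user controller through one transport ID string. Below is a minimal sketch of the read-workload invocation from this log; the flag notes are editorial annotations rather than output from the log, and should be checked against spdk_nvme_perf --help for the build in use:

#   -r  transport ID: trtype/traddr/subnqn select the vfio-user endpoint and target subsystem
#   -q  queue depth, -o I/O size in bytes, -w workload type, -t run time in seconds
#   -c  core mask (0x2 = lcore 1); -s and -g are memory/hugepage options passed by the test script
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
  -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2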
00:10:52.500 [2024-07-16 00:47:27.135520] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:52.500 00:47:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:52.500 EAL: No free 2048 kB hugepages reported on node 1 00:10:52.758 [2024-07-16 00:47:27.419304] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:54.139 Initializing NVMe Controllers 00:10:54.139 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:54.139 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:54.139 Initialization complete. Launching workers. 00:10:54.139 submit (in ns) avg, min, max = 7612.4, 3514.4, 4016144.4 00:10:54.139 complete (in ns) avg, min, max = 27572.0, 2061.1, 6993356.7 00:10:54.139 00:10:54.139 Submit histogram 00:10:54.139 ================ 00:10:54.139 Range in us Cumulative Count 00:10:54.139 3.508 - 3.532: 0.1907% ( 25) 00:10:54.139 3.532 - 3.556: 0.7779% ( 77) 00:10:54.139 3.556 - 3.579: 2.5549% ( 233) 00:10:54.139 3.579 - 3.603: 6.0555% ( 459) 00:10:54.139 3.603 - 3.627: 11.7678% ( 749) 00:10:54.139 3.627 - 3.650: 19.9436% ( 1072) 00:10:54.139 3.650 - 3.674: 29.0345% ( 1192) 00:10:54.139 3.674 - 3.698: 37.2788% ( 1081) 00:10:54.139 3.698 - 3.721: 45.1724% ( 1035) 00:10:54.139 3.721 - 3.745: 51.2431% ( 796) 00:10:54.139 3.745 - 3.769: 55.9945% ( 623) 00:10:54.139 3.769 - 3.793: 60.5476% ( 597) 00:10:54.139 3.793 - 3.816: 63.7736% ( 423) 00:10:54.139 3.816 - 3.840: 67.5793% ( 499) 00:10:54.139 3.840 - 3.864: 71.3087% ( 489) 00:10:54.139 3.864 - 3.887: 75.1449% ( 503) 00:10:54.139 3.887 - 3.911: 79.3700% ( 554) 00:10:54.139 3.911 - 3.935: 83.0461% ( 482) 00:10:54.139 3.935 - 3.959: 85.6162% ( 337) 00:10:54.139 3.959 - 3.982: 87.5076% ( 248) 00:10:54.139 3.982 - 4.006: 89.2312% ( 226) 00:10:54.139 4.006 - 4.030: 90.4973% ( 166) 00:10:54.139 4.030 - 4.053: 91.6184% ( 147) 00:10:54.139 4.053 - 4.077: 92.6861% ( 140) 00:10:54.139 4.077 - 4.101: 93.5708% ( 116) 00:10:54.139 4.101 - 4.124: 94.4631% ( 117) 00:10:54.139 4.124 - 4.148: 95.0656% ( 79) 00:10:54.139 4.148 - 4.172: 95.5003% ( 57) 00:10:54.139 4.172 - 4.196: 95.9045% ( 53) 00:10:54.139 4.196 - 4.219: 96.2172% ( 41) 00:10:54.139 4.219 - 4.243: 96.4994% ( 37) 00:10:54.139 4.243 - 4.267: 96.6901% ( 25) 00:10:54.139 4.267 - 4.290: 96.8350% ( 19) 00:10:54.139 4.290 - 4.314: 97.0027% ( 22) 00:10:54.139 4.314 - 4.338: 97.1019% ( 13) 00:10:54.139 4.338 - 4.361: 97.1782% ( 10) 00:10:54.139 4.361 - 4.385: 97.1934% ( 2) 00:10:54.139 4.385 - 4.409: 97.2468% ( 7) 00:10:54.139 4.409 - 4.433: 97.2926% ( 6) 00:10:54.139 4.433 - 4.456: 97.3078% ( 2) 00:10:54.139 4.456 - 4.480: 97.3612% ( 7) 00:10:54.139 4.480 - 4.504: 97.3917% ( 4) 00:10:54.139 4.504 - 4.527: 97.4146% ( 3) 00:10:54.139 4.527 - 4.551: 97.4298% ( 2) 00:10:54.139 4.551 - 4.575: 97.4451% ( 2) 00:10:54.139 4.575 - 4.599: 97.4756% ( 4) 00:10:54.139 4.599 - 4.622: 97.4832% ( 1) 00:10:54.139 4.622 - 4.646: 97.4908% ( 1) 00:10:54.139 4.670 - 4.693: 97.4985% ( 1) 00:10:54.139 4.717 - 4.741: 97.5061% ( 1) 00:10:54.139 4.741 - 4.764: 97.5137% ( 1) 00:10:54.139 4.812 - 4.836: 97.5290% ( 2) 00:10:54.139 4.836 - 4.859: 97.5595% ( 4) 00:10:54.139 4.859 - 4.883: 97.5671% ( 1) 00:10:54.139 4.883 - 4.907: 97.5824% ( 2) 00:10:54.139 4.907 - 4.930: 97.6358% ( 7) 00:10:54.139 
4.930 - 4.954: 97.7044% ( 9) 00:10:54.139 4.954 - 4.978: 97.7883% ( 11) 00:10:54.139 4.978 - 5.001: 97.8417% ( 7) 00:10:54.139 5.001 - 5.025: 97.9103% ( 9) 00:10:54.139 5.025 - 5.049: 97.9713% ( 8) 00:10:54.139 5.049 - 5.073: 98.0095% ( 5) 00:10:54.139 5.073 - 5.096: 98.0247% ( 2) 00:10:54.139 5.096 - 5.120: 98.0857% ( 8) 00:10:54.139 5.120 - 5.144: 98.1467% ( 8) 00:10:54.139 5.144 - 5.167: 98.1544% ( 1) 00:10:54.139 5.167 - 5.191: 98.1620% ( 1) 00:10:54.139 5.191 - 5.215: 98.1925% ( 4) 00:10:54.139 5.215 - 5.239: 98.2077% ( 2) 00:10:54.139 5.239 - 5.262: 98.2306% ( 3) 00:10:54.139 5.262 - 5.286: 98.2688% ( 5) 00:10:54.139 5.286 - 5.310: 98.2840% ( 2) 00:10:54.139 5.310 - 5.333: 98.2993% ( 2) 00:10:54.139 5.333 - 5.357: 98.3374% ( 5) 00:10:54.139 5.357 - 5.381: 98.3527% ( 2) 00:10:54.139 5.404 - 5.428: 98.3603% ( 1) 00:10:54.139 5.428 - 5.452: 98.3832% ( 3) 00:10:54.139 5.452 - 5.476: 98.3908% ( 1) 00:10:54.139 5.476 - 5.499: 98.3984% ( 1) 00:10:54.139 5.499 - 5.523: 98.4060% ( 1) 00:10:54.139 5.784 - 5.807: 98.4213% ( 2) 00:10:54.139 5.831 - 5.855: 98.4289% ( 1) 00:10:54.139 6.163 - 6.210: 98.4365% ( 1) 00:10:54.139 6.779 - 6.827: 98.4442% ( 1) 00:10:54.139 7.159 - 7.206: 98.4518% ( 1) 00:10:54.139 7.443 - 7.490: 98.4747% ( 3) 00:10:54.139 7.490 - 7.538: 98.4823% ( 1) 00:10:54.139 7.538 - 7.585: 98.4899% ( 1) 00:10:54.139 7.585 - 7.633: 98.4976% ( 1) 00:10:54.140 7.775 - 7.822: 98.5052% ( 1) 00:10:54.140 7.822 - 7.870: 98.5281% ( 3) 00:10:54.140 7.870 - 7.917: 98.5357% ( 1) 00:10:54.140 7.964 - 8.012: 98.5433% ( 1) 00:10:54.140 8.012 - 8.059: 98.5509% ( 1) 00:10:54.140 8.107 - 8.154: 98.5738% ( 3) 00:10:54.140 8.154 - 8.201: 98.5815% ( 1) 00:10:54.140 8.201 - 8.249: 98.5891% ( 1) 00:10:54.140 8.249 - 8.296: 98.5967% ( 1) 00:10:54.140 8.296 - 8.344: 98.6120% ( 2) 00:10:54.140 8.344 - 8.391: 98.6272% ( 2) 00:10:54.140 8.439 - 8.486: 98.6348% ( 1) 00:10:54.140 8.486 - 8.533: 98.6577% ( 3) 00:10:54.140 8.581 - 8.628: 98.6653% ( 1) 00:10:54.140 8.628 - 8.676: 98.6806% ( 2) 00:10:54.140 8.676 - 8.723: 98.6959% ( 2) 00:10:54.140 8.723 - 8.770: 98.7035% ( 1) 00:10:54.140 8.770 - 8.818: 98.7187% ( 2) 00:10:54.140 8.865 - 8.913: 98.7264% ( 1) 00:10:54.140 8.960 - 9.007: 98.7416% ( 2) 00:10:54.140 9.197 - 9.244: 98.7492% ( 1) 00:10:54.140 9.292 - 9.339: 98.7645% ( 2) 00:10:54.140 9.339 - 9.387: 98.7797% ( 2) 00:10:54.140 9.387 - 9.434: 98.7950% ( 2) 00:10:54.140 9.434 - 9.481: 98.8026% ( 1) 00:10:54.140 9.481 - 9.529: 98.8103% ( 1) 00:10:54.140 9.576 - 9.624: 98.8255% ( 2) 00:10:54.140 9.624 - 9.671: 98.8331% ( 1) 00:10:54.140 9.813 - 9.861: 98.8408% ( 1) 00:10:54.140 9.956 - 10.003: 98.8484% ( 1) 00:10:54.140 10.003 - 10.050: 98.8636% ( 2) 00:10:54.140 10.098 - 10.145: 98.8713% ( 1) 00:10:54.140 10.145 - 10.193: 98.8789% ( 1) 00:10:54.140 10.240 - 10.287: 98.8865% ( 1) 00:10:54.140 10.477 - 10.524: 98.8941% ( 1) 00:10:54.140 10.714 - 10.761: 98.9094% ( 2) 00:10:54.140 10.856 - 10.904: 98.9170% ( 1) 00:10:54.140 10.951 - 10.999: 98.9246% ( 1) 00:10:54.140 11.093 - 11.141: 98.9475% ( 3) 00:10:54.140 11.283 - 11.330: 98.9552% ( 1) 00:10:54.140 11.852 - 11.899: 98.9628% ( 1) 00:10:54.140 11.899 - 11.947: 98.9780% ( 2) 00:10:54.140 12.326 - 12.421: 98.9857% ( 1) 00:10:54.140 12.421 - 12.516: 98.9933% ( 1) 00:10:54.140 12.516 - 12.610: 99.0009% ( 1) 00:10:54.140 12.610 - 12.705: 99.0085% ( 1) 00:10:54.140 12.705 - 12.800: 99.0162% ( 1) 00:10:54.140 12.800 - 12.895: 99.0238% ( 1) 00:10:54.140 12.895 - 12.990: 99.0390% ( 2) 00:10:54.140 12.990 - 13.084: 99.0543% ( 2) 00:10:54.140 13.084 - 13.179: 99.0619% 
( 1) 00:10:54.140 13.464 - 13.559: 99.0696% ( 1) 00:10:54.140 13.653 - 13.748: 99.0848% ( 2) 00:10:54.140 13.938 - 14.033: 99.0924% ( 1) 00:10:54.140 14.317 - 14.412: 99.1001% ( 1) 00:10:54.140 14.507 - 14.601: 99.1077% ( 1) 00:10:54.140 15.360 - 15.455: 99.1153% ( 1) 00:10:54.140 17.256 - 17.351: 99.1306% ( 2) 00:10:54.140 17.351 - 17.446: 99.1458% ( 2) 00:10:54.140 17.446 - 17.541: 99.1687% ( 3) 00:10:54.140 17.541 - 17.636: 99.1763% ( 1) 00:10:54.140 17.636 - 17.730: 99.2145% ( 5) 00:10:54.140 17.730 - 17.825: 99.2297% ( 2) 00:10:54.140 17.825 - 17.920: 99.3060% ( 10) 00:10:54.140 17.920 - 18.015: 99.3441% ( 5) 00:10:54.140 18.015 - 18.110: 99.3594% ( 2) 00:10:54.140 18.110 - 18.204: 99.4051% ( 6) 00:10:54.140 18.204 - 18.299: 99.4509% ( 6) 00:10:54.140 18.299 - 18.394: 99.5043% ( 7) 00:10:54.140 18.394 - 18.489: 99.5729% ( 9) 00:10:54.140 18.489 - 18.584: 99.6263% ( 7) 00:10:54.140 18.584 - 18.679: 99.6873% ( 8) 00:10:54.140 18.679 - 18.773: 99.7102% ( 3) 00:10:54.140 18.773 - 18.868: 99.7407% ( 4) 00:10:54.140 18.868 - 18.963: 99.7559% ( 2) 00:10:54.140 18.963 - 19.058: 99.7865% ( 4) 00:10:54.140 19.058 - 19.153: 99.7941% ( 1) 00:10:54.140 19.153 - 19.247: 99.8017% ( 1) 00:10:54.140 19.247 - 19.342: 99.8170% ( 2) 00:10:54.140 19.342 - 19.437: 99.8246% ( 1) 00:10:54.140 19.532 - 19.627: 99.8322% ( 1) 00:10:54.140 20.859 - 20.954: 99.8398% ( 1) 00:10:54.140 20.954 - 21.049: 99.8475% ( 1) 00:10:54.140 21.428 - 21.523: 99.8551% ( 1) 00:10:54.140 22.376 - 22.471: 99.8627% ( 1) 00:10:54.140 23.419 - 23.514: 99.8703% ( 1) 00:10:54.140 24.273 - 24.462: 99.8780% ( 1) 00:10:54.140 26.169 - 26.359: 99.8856% ( 1) 00:10:54.140 26.738 - 26.927: 99.8932% ( 1) 00:10:54.140 27.307 - 27.496: 99.9009% ( 1) 00:10:54.140 29.393 - 29.582: 99.9085% ( 1) 00:10:54.140 3980.705 - 4004.978: 99.9847% ( 10) 00:10:54.140 4004.978 - 4029.250: 100.0000% ( 2) 00:10:54.140 00:10:54.140 Complete histogram 00:10:54.140 ================== 00:10:54.140 Range in us Cumulative Count 00:10:54.140 2.050 - 2.062: 0.0076% ( 1) 00:10:54.140 2.062 - 2.074: 8.3130% ( 1089) 00:10:54.140 2.074 - 2.086: 17.2056% ( 1166) 00:10:54.140 2.086 - 2.098: 20.3173% ( 408) 00:10:54.140 2.098 - 2.110: 50.5339% ( 3962) 00:10:54.140 2.110 - 2.121: 57.5275% ( 917) 00:10:54.140 2.121 - 2.133: 59.1748% ( 216) 00:10:54.140 2.133 - 2.145: 64.3380% ( 677) 00:10:54.140 2.145 - 2.157: 66.3438% ( 263) 00:10:54.140 2.157 - 2.169: 69.6843% ( 438) 00:10:54.140 2.169 - 2.181: 79.2328% ( 1252) 00:10:54.140 2.181 - 2.193: 81.5055% ( 298) 00:10:54.140 2.193 - 2.204: 82.1690% ( 87) 00:10:54.140 2.204 - 2.216: 83.8316% ( 218) 00:10:54.140 2.216 - 2.228: 85.5400% ( 224) 00:10:54.140 2.228 - 2.240: 87.3627% ( 239) 00:10:54.140 2.240 - 2.252: 91.4887% ( 541) 00:10:54.140 2.252 - 2.264: 93.0522% ( 205) 00:10:54.140 2.264 - 2.276: 93.5631% ( 67) 00:10:54.140 2.276 - 2.287: 94.0665% ( 66) 00:10:54.140 2.287 - 2.299: 94.6309% ( 74) 00:10:54.140 2.299 - 2.311: 95.0427% ( 54) 00:10:54.140 2.311 - 2.323: 95.2334% ( 25) 00:10:54.140 2.323 - 2.335: 95.4774% ( 32) 00:10:54.140 2.335 - 2.347: 95.5842% ( 14) 00:10:54.140 2.347 - 2.359: 95.6300% ( 6) 00:10:54.140 2.359 - 2.370: 95.8588% ( 30) 00:10:54.140 2.370 - 2.382: 96.0647% ( 27) 00:10:54.140 2.382 - 2.394: 96.2630% ( 26) 00:10:54.140 2.394 - 2.406: 96.5070% ( 32) 00:10:54.140 2.406 - 2.418: 96.7358% ( 30) 00:10:54.140 2.418 - 2.430: 96.9417% ( 27) 00:10:54.140 2.430 - 2.441: 97.1934% ( 33) 00:10:54.140 2.441 - 2.453: 97.3459% ( 20) 00:10:54.140 2.453 - 2.465: 97.5519% ( 27) 00:10:54.140 2.465 - 2.477: 97.6510% ( 13) 
00:10:54.140 2.477 - 2.489: 97.7349% ( 11) 00:10:54.140 2.489 - 2.501: 97.9179% ( 24) 00:10:54.140 2.501 - 2.513: 97.9942% ( 10) 00:10:54.140 2.513 - 2.524: 98.0628% ( 9) 00:10:54.140 2.524 - 2.536: 98.1162% ( 7) 00:10:54.140 2.536 - 2.548: 98.1467% ( 4) 00:10:54.140 2.548 - 2.560: 98.2001% ( 7) 00:10:54.140 2.560 - 2.572: 98.2230% ( 3) 00:10:54.140 2.572 - 2.584: 98.2383% ( 2) 00:10:54.140 2.596 - 2.607: 98.2535% ( 2) 00:10:54.140 2.631 - 2.643: 98.2764% ( 3) 00:10:54.140 2.643 - 2.655: 98.2840% ( 1) 00:10:54.140 2.702 - 2.714: 98.2916% ( 1) 00:10:54.140 2.714 - 2.726: 98.2993% ( 1) 00:10:54.140 2.738 - 2.750: 98.3145% ( 2) 00:10:54.140 2.750 - 2.761: 98.3221% ( 1) 00:10:54.140 2.773 - 2.785: 98.3298% ( 1) 00:10:54.140 2.785 - 2.797: 98.3450% ( 2) 00:10:54.140 2.904 - 2.916: 98.3527% ( 1) 00:10:54.140 3.105 - 3.129: 98.3603% ( 1) 00:10:54.140 3.342 - 3.366: 98.3679% ( 1) 00:10:54.140 3.390 - 3.413: 98.3755% ( 1) 00:10:54.140 3.437 - 3.461: 98.3908% ( 2) 00:10:54.140 3.461 - 3.484: 98.3984% ( 1) 00:10:54.140 3.484 - 3.508: 98.4137% ( 2) 00:10:54.140 3.508 - 3.532: 98.4213% ( 1) 00:10:54.140 3.532 - 3.556: 98.4289% ( 1) 00:10:54.140 3.556 - 3.579: 98.4442% ( 2) 00:10:54.140 3.579 - 3.603: 98.4518% ( 1) 00:10:54.140 3.627 - 3.650: 98.4671% ( 2) 00:10:54.140 3.674 - 3.698: 98.4823% ( 2) 00:10:54.140 3.698 - 3.721: 98.4899% ( 1) 00:10:54.140 3.721 - 3.745: 98.4976% ( 1) 00:10:54.140 3.745 - 3.769: 9[2024-07-16 00:47:28.517694] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:54.140 8.5052% ( 1) 00:10:54.140 3.769 - 3.793: 98.5128% ( 1) 00:10:54.140 3.816 - 3.840: 98.5204% ( 1) 00:10:54.140 3.840 - 3.864: 98.5281% ( 1) 00:10:54.140 3.864 - 3.887: 98.5357% ( 1) 00:10:54.140 3.887 - 3.911: 98.5433% ( 1) 00:10:54.140 3.935 - 3.959: 98.5509% ( 1) 00:10:54.140 4.006 - 4.030: 98.5586% ( 1) 00:10:54.140 4.053 - 4.077: 98.5662% ( 1) 00:10:54.140 4.077 - 4.101: 98.5738% ( 1) 00:10:54.140 5.096 - 5.120: 98.5815% ( 1) 00:10:54.140 5.381 - 5.404: 98.5891% ( 1) 00:10:54.140 5.404 - 5.428: 98.5967% ( 1) 00:10:54.140 5.926 - 5.950: 98.6043% ( 1) 00:10:54.140 6.068 - 6.116: 98.6120% ( 1) 00:10:54.140 6.258 - 6.305: 98.6196% ( 1) 00:10:54.140 6.400 - 6.447: 98.6348% ( 2) 00:10:54.140 6.542 - 6.590: 98.6501% ( 2) 00:10:54.140 6.590 - 6.637: 98.6577% ( 1) 00:10:54.140 6.874 - 6.921: 98.6653% ( 1) 00:10:54.140 6.921 - 6.969: 98.6806% ( 2) 00:10:54.140 6.969 - 7.016: 98.6882% ( 1) 00:10:54.140 7.111 - 7.159: 98.6959% ( 1) 00:10:54.140 7.633 - 7.680: 98.7035% ( 1) 00:10:54.140 8.391 - 8.439: 98.7111% ( 1) 00:10:54.140 8.581 - 8.628: 98.7187% ( 1) 00:10:54.140 9.244 - 9.292: 98.7264% ( 1) 00:10:54.140 12.516 - 12.610: 98.7340% ( 1) 00:10:54.141 12.990 - 13.084: 98.7416% ( 1) 00:10:54.141 15.550 - 15.644: 98.7492% ( 1) 00:10:54.141 15.644 - 15.739: 98.7645% ( 2) 00:10:54.141 15.739 - 15.834: 98.7950% ( 4) 00:10:54.141 15.834 - 15.929: 98.8179% ( 3) 00:10:54.141 15.929 - 16.024: 98.8484% ( 4) 00:10:54.141 16.119 - 16.213: 98.9018% ( 7) 00:10:54.141 16.213 - 16.308: 98.9170% ( 2) 00:10:54.141 16.308 - 16.403: 98.9323% ( 2) 00:10:54.141 16.403 - 16.498: 98.9475% ( 2) 00:10:54.141 16.498 - 16.593: 99.0238% ( 10) 00:10:54.141 16.593 - 16.687: 99.1001% ( 10) 00:10:54.141 16.687 - 16.782: 99.1306% ( 4) 00:10:54.141 16.782 - 16.877: 99.2145% ( 11) 00:10:54.141 16.877 - 16.972: 99.2373% ( 3) 00:10:54.141 16.972 - 17.067: 99.2526% ( 2) 00:10:54.141 17.161 - 17.256: 99.2678% ( 2) 00:10:54.141 17.256 - 17.351: 99.2984% ( 4) 00:10:54.141 17.351 - 17.446: 99.3212% ( 3) 
00:10:54.141 17.636 - 17.730: 99.3365% ( 2) 00:10:54.141 17.825 - 17.920: 99.3441% ( 1) 00:10:54.141 18.110 - 18.204: 99.3517% ( 1) 00:10:54.141 18.204 - 18.299: 99.3594% ( 1) 00:10:54.141 18.963 - 19.058: 99.3670% ( 1) 00:10:54.141 29.393 - 29.582: 99.3746% ( 1) 00:10:54.141 3980.705 - 4004.978: 99.8703% ( 65) 00:10:54.141 4004.978 - 4029.250: 99.9847% ( 15) 00:10:54.141 4975.881 - 5000.154: 99.9924% ( 1) 00:10:54.141 6990.507 - 7039.052: 100.0000% ( 1) 00:10:54.141 00:10:54.141 00:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:54.141 00:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:54.141 00:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:54.141 00:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:54.141 00:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:54.141 [ 00:10:54.141 { 00:10:54.141 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:54.141 "subtype": "Discovery", 00:10:54.141 "listen_addresses": [], 00:10:54.141 "allow_any_host": true, 00:10:54.141 "hosts": [] 00:10:54.141 }, 00:10:54.141 { 00:10:54.141 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:54.141 "subtype": "NVMe", 00:10:54.141 "listen_addresses": [ 00:10:54.141 { 00:10:54.141 "trtype": "VFIOUSER", 00:10:54.141 "adrfam": "IPv4", 00:10:54.141 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:54.141 "trsvcid": "0" 00:10:54.141 } 00:10:54.141 ], 00:10:54.141 "allow_any_host": true, 00:10:54.141 "hosts": [], 00:10:54.141 "serial_number": "SPDK1", 00:10:54.141 "model_number": "SPDK bdev Controller", 00:10:54.141 "max_namespaces": 32, 00:10:54.141 "min_cntlid": 1, 00:10:54.141 "max_cntlid": 65519, 00:10:54.141 "namespaces": [ 00:10:54.141 { 00:10:54.141 "nsid": 1, 00:10:54.141 "bdev_name": "Malloc1", 00:10:54.141 "name": "Malloc1", 00:10:54.141 "nguid": "8A2A5C211F4347699D7EFF3B0EC5D315", 00:10:54.141 "uuid": "8a2a5c21-1f43-4769-9d7e-ff3b0ec5d315" 00:10:54.141 }, 00:10:54.141 { 00:10:54.141 "nsid": 2, 00:10:54.141 "bdev_name": "Malloc3", 00:10:54.141 "name": "Malloc3", 00:10:54.141 "nguid": "F3EE656440B24B64A52FF832F94A06C3", 00:10:54.141 "uuid": "f3ee6564-40b2-4b64-a52f-f832f94a06c3" 00:10:54.141 } 00:10:54.141 ] 00:10:54.141 }, 00:10:54.141 { 00:10:54.141 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:54.141 "subtype": "NVMe", 00:10:54.141 "listen_addresses": [ 00:10:54.141 { 00:10:54.141 "trtype": "VFIOUSER", 00:10:54.141 "adrfam": "IPv4", 00:10:54.141 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:54.141 "trsvcid": "0" 00:10:54.141 } 00:10:54.141 ], 00:10:54.141 "allow_any_host": true, 00:10:54.141 "hosts": [], 00:10:54.141 "serial_number": "SPDK2", 00:10:54.141 "model_number": "SPDK bdev Controller", 00:10:54.141 "max_namespaces": 32, 00:10:54.141 "min_cntlid": 1, 00:10:54.141 "max_cntlid": 65519, 00:10:54.141 "namespaces": [ 00:10:54.141 { 00:10:54.141 "nsid": 1, 00:10:54.141 "bdev_name": "Malloc2", 00:10:54.141 "name": "Malloc2", 00:10:54.141 "nguid": "881FC6DED65949798B5C020943F01219", 00:10:54.141 "uuid": "881fc6de-d659-4979-8b5c-020943f01219" 00:10:54.141 } 00:10:54.141 ] 00:10:54.141 } 00:10:54.141 ] 00:10:54.141 00:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # 
AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:54.141 00:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2591344 00:10:54.141 00:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:54.141 00:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:54.141 00:47:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:54.141 00:47:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:54.141 00:47:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:54.141 00:47:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:54.141 00:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:54.141 00:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:54.141 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.399 [2024-07-16 00:47:29.001360] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:54.399 Malloc4 00:10:54.399 00:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:54.657 [2024-07-16 00:47:29.344999] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:54.657 00:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:54.657 Asynchronous Event Request test 00:10:54.657 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:54.657 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:54.657 Registering asynchronous event callbacks... 00:10:54.657 Starting namespace attribute notice tests for all controllers... 00:10:54.657 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:54.657 aer_cb - Changed Namespace 00:10:54.657 Cleaning up... 
00:10:54.915 [ 00:10:54.915 { 00:10:54.915 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:54.915 "subtype": "Discovery", 00:10:54.915 "listen_addresses": [], 00:10:54.915 "allow_any_host": true, 00:10:54.915 "hosts": [] 00:10:54.915 }, 00:10:54.915 { 00:10:54.915 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:54.915 "subtype": "NVMe", 00:10:54.915 "listen_addresses": [ 00:10:54.915 { 00:10:54.915 "trtype": "VFIOUSER", 00:10:54.915 "adrfam": "IPv4", 00:10:54.915 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:54.915 "trsvcid": "0" 00:10:54.915 } 00:10:54.915 ], 00:10:54.915 "allow_any_host": true, 00:10:54.915 "hosts": [], 00:10:54.915 "serial_number": "SPDK1", 00:10:54.915 "model_number": "SPDK bdev Controller", 00:10:54.916 "max_namespaces": 32, 00:10:54.916 "min_cntlid": 1, 00:10:54.916 "max_cntlid": 65519, 00:10:54.916 "namespaces": [ 00:10:54.916 { 00:10:54.916 "nsid": 1, 00:10:54.916 "bdev_name": "Malloc1", 00:10:54.916 "name": "Malloc1", 00:10:54.916 "nguid": "8A2A5C211F4347699D7EFF3B0EC5D315", 00:10:54.916 "uuid": "8a2a5c21-1f43-4769-9d7e-ff3b0ec5d315" 00:10:54.916 }, 00:10:54.916 { 00:10:54.916 "nsid": 2, 00:10:54.916 "bdev_name": "Malloc3", 00:10:54.916 "name": "Malloc3", 00:10:54.916 "nguid": "F3EE656440B24B64A52FF832F94A06C3", 00:10:54.916 "uuid": "f3ee6564-40b2-4b64-a52f-f832f94a06c3" 00:10:54.916 } 00:10:54.916 ] 00:10:54.916 }, 00:10:54.916 { 00:10:54.916 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:54.916 "subtype": "NVMe", 00:10:54.916 "listen_addresses": [ 00:10:54.916 { 00:10:54.916 "trtype": "VFIOUSER", 00:10:54.916 "adrfam": "IPv4", 00:10:54.916 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:54.916 "trsvcid": "0" 00:10:54.916 } 00:10:54.916 ], 00:10:54.916 "allow_any_host": true, 00:10:54.916 "hosts": [], 00:10:54.916 "serial_number": "SPDK2", 00:10:54.916 "model_number": "SPDK bdev Controller", 00:10:54.916 "max_namespaces": 32, 00:10:54.916 "min_cntlid": 1, 00:10:54.916 "max_cntlid": 65519, 00:10:54.916 "namespaces": [ 00:10:54.916 { 00:10:54.916 "nsid": 1, 00:10:54.916 "bdev_name": "Malloc2", 00:10:54.916 "name": "Malloc2", 00:10:54.916 "nguid": "881FC6DED65949798B5C020943F01219", 00:10:54.916 "uuid": "881fc6de-d659-4979-8b5c-020943f01219" 00:10:54.916 }, 00:10:54.916 { 00:10:54.916 "nsid": 2, 00:10:54.916 "bdev_name": "Malloc4", 00:10:54.916 "name": "Malloc4", 00:10:54.916 "nguid": "2569E3B736B24D329BF7D376AF8B0B0D", 00:10:54.916 "uuid": "2569e3b7-36b2-4d32-9bf7-d376af8b0b0d" 00:10:54.916 } 00:10:54.916 ] 00:10:54.916 } 00:10:54.916 ] 00:10:54.916 00:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2591344 00:10:54.916 00:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:54.916 00:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2585723 00:10:54.916 00:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2585723 ']' 00:10:54.916 00:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2585723 00:10:54.916 00:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:54.916 00:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:54.916 00:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2585723 00:10:54.916 00:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:54.916 00:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:10:54.916 00:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2585723' 00:10:54.916 killing process with pid 2585723 00:10:54.916 00:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2585723 00:10:54.916 00:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2585723 00:10:55.484 00:47:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:55.484 00:47:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:55.484 00:47:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:55.484 00:47:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:55.484 00:47:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:55.484 00:47:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:55.484 00:47:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2591482 00:10:55.484 00:47:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2591482' 00:10:55.484 Process pid: 2591482 00:10:55.484 00:47:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:55.484 00:47:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2591482 00:10:55.484 00:47:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2591482 ']' 00:10:55.484 00:47:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.484 00:47:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.484 00:47:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.484 00:47:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.484 00:47:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:55.484 [2024-07-16 00:47:30.071062] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:55.484 [2024-07-16 00:47:30.072227] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:10:55.484 [2024-07-16 00:47:30.072287] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.484 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.484 [2024-07-16 00:47:30.129989] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.484 [2024-07-16 00:47:30.236339] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.484 [2024-07-16 00:47:30.236407] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:55.484 [2024-07-16 00:47:30.236430] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.484 [2024-07-16 00:47:30.236440] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.485 [2024-07-16 00:47:30.236449] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.485 [2024-07-16 00:47:30.236545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.485 [2024-07-16 00:47:30.236611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.485 [2024-07-16 00:47:30.236676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.485 [2024-07-16 00:47:30.236678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.744 [2024-07-16 00:47:30.332219] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:55.744 [2024-07-16 00:47:30.332444] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:55.744 [2024-07-16 00:47:30.332722] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:55.744 [2024-07-16 00:47:30.333322] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:55.744 [2024-07-16 00:47:30.333599] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:10:55.744 00:47:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.744 00:47:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:55.744 00:47:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:56.682 00:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:56.940 00:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:56.940 00:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:56.940 00:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:56.940 00:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:56.940 00:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:57.198 Malloc1 00:10:57.198 00:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:57.457 00:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:57.715 00:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:57.973 00:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:10:57.973 00:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:57.973 00:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:58.231 Malloc2 00:10:58.232 00:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:58.490 00:47:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:58.747 00:47:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:59.012 00:47:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:59.012 00:47:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2591482 00:10:59.012 00:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2591482 ']' 00:10:59.012 00:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2591482 00:10:59.012 00:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:59.012 00:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:59.012 00:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2591482 00:10:59.012 00:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:59.012 00:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:59.012 00:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2591482' 00:10:59.012 killing process with pid 2591482 00:10:59.012 00:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2591482 00:10:59.012 00:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2591482 00:10:59.271 00:47:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:59.271 00:47:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:59.271 00:10:59.271 real 0m52.885s 00:10:59.271 user 3m28.727s 00:10:59.271 sys 0m4.177s 00:10:59.271 00:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:59.271 00:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:59.271 ************************************ 00:10:59.271 END TEST nvmf_vfio_user 00:10:59.271 ************************************ 00:10:59.271 00:47:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:59.271 00:47:33 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:59.271 00:47:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:59.271 00:47:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.271 00:47:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:59.271 ************************************ 00:10:59.271 START 
TEST nvmf_vfio_user_nvme_compliance 00:10:59.271 ************************************ 00:10:59.271 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:59.529 * Looking for test storage... 00:10:59.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:59.529 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.529 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:59.529 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.529 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2591967 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2591967' 00:10:59.530 Process pid: 2591967 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2591967 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2591967 ']' 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:59.530 00:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:59.530 [2024-07-16 00:47:34.132951] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:10:59.530 [2024-07-16 00:47:34.133057] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.530 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.530 [2024-07-16 00:47:34.204340] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:59.790 [2024-07-16 00:47:34.330872] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.790 [2024-07-16 00:47:34.330963] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.790 [2024-07-16 00:47:34.330988] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.790 [2024-07-16 00:47:34.331001] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.790 [2024-07-16 00:47:34.331012] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
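Both the nvmf_vfio_user run above and the compliance run that continues below create their vfio-user endpoint with the same short RPC sequence. Condensed into a standalone sketch for reference (rpc.py path shortened; values taken from the trace, not a verbatim copy of compliance.sh):

  # Assumes an SPDK nvmf_tgt is already running and serving /var/tmp/spdk.sock.
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0        # 64 MB malloc bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32   # -a: allow any host, -s: serial, -m: max namespaces
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

A host-side tool (here the nvme_compliance binary) then attaches with trtype:VFIOUSER and traddr set to that socket directory, as the trace below shows.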
00:10:59.790 [2024-07-16 00:47:34.331089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.790 [2024-07-16 00:47:34.331132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.790 [2024-07-16 00:47:34.331137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.725 00:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:00.725 00:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:11:00.725 00:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:01.664 malloc0 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:01.664 00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.664 
00:47:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:11:01.664 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.664 00:11:01.664 00:11:01.664 CUnit - A unit testing framework for C - Version 2.1-3 00:11:01.664 http://cunit.sourceforge.net/ 00:11:01.664 00:11:01.664 00:11:01.664 Suite: nvme_compliance 00:11:01.664 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-16 00:47:36.362805] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:01.664 [2024-07-16 00:47:36.364334] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:11:01.664 [2024-07-16 00:47:36.364359] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:11:01.664 [2024-07-16 00:47:36.364371] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:11:01.664 [2024-07-16 00:47:36.365827] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:01.664 passed 00:11:01.924 Test: admin_identify_ctrlr_verify_fused ...[2024-07-16 00:47:36.457505] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:01.924 [2024-07-16 00:47:36.460526] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:01.924 passed 00:11:01.924 Test: admin_identify_ns ...[2024-07-16 00:47:36.548246] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:01.924 [2024-07-16 00:47:36.609910] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:01.924 [2024-07-16 00:47:36.617894] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:11:01.924 [2024-07-16 00:47:36.639029] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:01.924 passed 00:11:02.183 Test: admin_get_features_mandatory_features ...[2024-07-16 00:47:36.719633] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:02.183 [2024-07-16 00:47:36.724668] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:02.183 passed 00:11:02.183 Test: admin_get_features_optional_features ...[2024-07-16 00:47:36.807204] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:02.183 [2024-07-16 00:47:36.810222] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:02.183 passed 00:11:02.183 Test: admin_set_features_number_of_queues ...[2024-07-16 00:47:36.897437] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:02.443 [2024-07-16 00:47:37.002003] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:02.443 passed 00:11:02.443 Test: admin_get_log_page_mandatory_logs ...[2024-07-16 00:47:37.086072] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:02.443 [2024-07-16 00:47:37.089093] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:02.443 passed 00:11:02.443 Test: admin_get_log_page_with_lpo ...[2024-07-16 00:47:37.172228] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:02.704 [2024-07-16 00:47:37.239893] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:11:02.704 [2024-07-16 00:47:37.252967] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:02.704 passed 00:11:02.704 Test: fabric_property_get ...[2024-07-16 00:47:37.336112] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:02.704 [2024-07-16 00:47:37.337403] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:11:02.704 [2024-07-16 00:47:37.339129] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:02.704 passed 00:11:02.704 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-16 00:47:37.425750] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:02.704 [2024-07-16 00:47:37.427071] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:11:02.704 [2024-07-16 00:47:37.428774] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:02.704 passed 00:11:02.990 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-16 00:47:37.510108] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:02.991 [2024-07-16 00:47:37.600902] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:02.991 [2024-07-16 00:47:37.616889] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:02.991 [2024-07-16 00:47:37.621997] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:02.991 passed 00:11:02.991 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-16 00:47:37.707477] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:02.991 [2024-07-16 00:47:37.708752] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:11:02.991 [2024-07-16 00:47:37.710496] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:03.250 passed 00:11:03.250 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-16 00:47:37.793661] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:03.250 [2024-07-16 00:47:37.868893] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:03.250 [2024-07-16 00:47:37.892885] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:03.250 [2024-07-16 00:47:37.897993] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:03.250 passed 00:11:03.250 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-16 00:47:37.986137] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:03.250 [2024-07-16 00:47:37.987417] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:11:03.250 [2024-07-16 00:47:37.987456] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:11:03.250 [2024-07-16 00:47:37.989165] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:03.509 passed 00:11:03.509 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-16 00:47:38.072341] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:03.509 [2024-07-16 00:47:38.167887] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:11:03.509 [2024-07-16 00:47:38.175883] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:11:03.509 [2024-07-16 00:47:38.183887] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:11:03.509 [2024-07-16 00:47:38.191885] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:11:03.509 [2024-07-16 00:47:38.220993] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:03.509 passed 00:11:03.768 Test: admin_create_io_sq_verify_pc ...[2024-07-16 00:47:38.303584] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:03.768 [2024-07-16 00:47:38.319898] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:11:03.768 [2024-07-16 00:47:38.337961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:03.768 passed 00:11:03.768 Test: admin_create_io_qp_max_qps ...[2024-07-16 00:47:38.418473] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:05.145 [2024-07-16 00:47:39.534894] nvme_ctrlr.c:5475:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:11:05.405 [2024-07-16 00:47:39.918683] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:05.405 passed 00:11:05.405 Test: admin_create_io_sq_shared_cq ...[2024-07-16 00:47:40.002968] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:05.405 [2024-07-16 00:47:40.135901] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:05.665 [2024-07-16 00:47:40.173002] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:05.665 passed 00:11:05.665 00:11:05.665 Run Summary: Type Total Ran Passed Failed Inactive 00:11:05.665 suites 1 1 n/a 0 0 00:11:05.665 tests 18 18 18 0 0 00:11:05.665 asserts 360 360 360 0 n/a 00:11:05.665 00:11:05.665 Elapsed time = 1.582 seconds 00:11:05.665 00:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2591967 00:11:05.665 00:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2591967 ']' 00:11:05.665 00:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2591967 00:11:05.665 00:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:11:05.665 00:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:05.665 00:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2591967 00:11:05.665 00:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:05.665 00:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:05.665 00:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2591967' 00:11:05.665 killing process with pid 2591967 00:11:05.665 00:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2591967 00:11:05.665 00:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2591967 00:11:05.922 00:47:40 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:11:05.922 00:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:11:05.922 00:11:05.922 real 0m6.544s 00:11:05.922 user 0m18.636s 00:11:05.922 sys 0m0.572s 00:11:05.922 00:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:05.922 00:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:05.922 ************************************ 00:11:05.922 END TEST nvmf_vfio_user_nvme_compliance 00:11:05.922 ************************************ 00:11:05.922 00:47:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:05.922 00:47:40 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:05.922 00:47:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:05.922 00:47:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.922 00:47:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:05.922 ************************************ 00:11:05.922 START TEST nvmf_vfio_user_fuzz 00:11:05.922 ************************************ 00:11:05.923 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:05.923 * Looking for test storage... 00:11:05.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.923 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.923 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:11:05.923 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.923 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.923 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.923 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.923 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.923 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.923 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.923 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.923 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.923 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.183 00:47:40 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2592811 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2592811' 00:11:06.183 Process pid: 2592811 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2592811 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2592811 ']' 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
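Once this target is listening, the trace that follows creates a single vfio-user subsystem (the same rpc_cmd sequence as in the compliance run above, without the -m 32 option) and then points the NVMe fuzzer at it for 30 seconds. Condensed, with the binary path shortened:

  # -t 30 runs for 30 seconds; -S supplies a fixed seed value; -F selects the vfio-user transport ID;
  # -N and -a are copied verbatim from the trace.
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

The counters printed after "Fuzzing completed" are the fuzzer's own summary of how many admin and I/O commands completed successfully during the run.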
00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.183 00:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:06.494 00:47:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.494 00:47:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:11:06.494 00:47:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:07.429 malloc0 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:11:07.429 00:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:39.500 Fuzzing completed. 
Shutting down the fuzz application 00:11:39.500 00:11:39.500 Dumping successful admin opcodes: 00:11:39.500 8, 9, 10, 24, 00:11:39.500 Dumping successful io opcodes: 00:11:39.500 0, 00:11:39.500 NS: 0x200003a1ef00 I/O qp, Total commands completed: 678905, total successful commands: 2642, random_seed: 1039906432 00:11:39.500 NS: 0x200003a1ef00 admin qp, Total commands completed: 162663, total successful commands: 1313, random_seed: 39938944 00:11:39.500 00:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:39.500 00:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.500 00:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:39.500 00:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.500 00:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2592811 00:11:39.500 00:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2592811 ']' 00:11:39.500 00:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2592811 00:11:39.500 00:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:11:39.500 00:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:39.500 00:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2592811 00:11:39.500 00:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:39.500 00:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:39.500 00:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2592811' 00:11:39.500 killing process with pid 2592811 00:11:39.500 00:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 2592811 00:11:39.500 00:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2592811 00:11:39.500 00:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:39.500 00:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:39.500 00:11:39.500 real 0m32.380s 00:11:39.500 user 0m33.275s 00:11:39.500 sys 0m26.713s 00:11:39.500 00:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.500 00:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:39.500 ************************************ 00:11:39.500 END TEST nvmf_vfio_user_fuzz 00:11:39.500 ************************************ 00:11:39.500 00:48:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:39.500 00:48:13 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:39.500 00:48:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:39.500 00:48:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.500 00:48:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:39.500 ************************************ 00:11:39.500 
START TEST nvmf_host_management 00:11:39.500 ************************************ 00:11:39.500 00:48:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:39.500 * Looking for test storage... 00:11:39.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.501 00:48:13 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:39.501 00:48:13 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:39.501 00:48:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:40.438 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.438 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:40.439 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:40.439 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:40.439 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:40.439 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:40.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:40.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:11:40.699 00:11:40.699 --- 10.0.0.2 ping statistics --- 00:11:40.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.699 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:40.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:40.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:11:40.699 00:11:40.699 --- 10.0.0.1 ping statistics --- 00:11:40.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.699 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2598877 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2598877 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2598877 ']' 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:40.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:40.699 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:40.699 [2024-07-16 00:48:15.295235] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:11:40.699 [2024-07-16 00:48:15.295329] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.699 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.699 [2024-07-16 00:48:15.360647] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.959 [2024-07-16 00:48:15.473563] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.959 [2024-07-16 00:48:15.473620] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.959 [2024-07-16 00:48:15.473647] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.959 [2024-07-16 00:48:15.473658] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.959 [2024-07-16 00:48:15.473667] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:40.959 [2024-07-16 00:48:15.473756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.959 [2024-07-16 00:48:15.473799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.959 [2024-07-16 00:48:15.473866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:40.959 [2024-07-16 00:48:15.473866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:40.959 [2024-07-16 00:48:15.636762] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:40.959 00:48:15 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:40.959 Malloc0 00:11:40.959 [2024-07-16 00:48:15.702403] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:40.959 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:41.218 00:48:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2598924 00:11:41.218 00:48:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2598924 /var/tmp/bdevperf.sock 00:11:41.218 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2598924 ']' 00:11:41.218 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:41.218 00:48:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:41.218 00:48:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:41.218 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:41.218 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:41.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
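The rpcs.txt that host_management.sh assembles and pipes into rpc_cmd above is not visible in this excerpt; the lines below are only a generic sketch, using standard scripts/rpc.py calls, of a configuration that would produce the Malloc0 bdev and the 10.0.0.2:4420 TCP listener reported in the surrounding log (the malloc sizes, serial number, and exact option set are assumptions, not the captured file):

    # Sketch only - not the captured rpcs.txt content
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                  # as issued at host_management.sh@18
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                      # backing bdev; sizes illustrative
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK00000000000001    # serial assumed
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0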
00:11:41.218 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:41.218 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:41.218 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:41.218 00:48:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:41.218 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:41.218 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:41.218 { 00:11:41.218 "params": { 00:11:41.218 "name": "Nvme$subsystem", 00:11:41.218 "trtype": "$TEST_TRANSPORT", 00:11:41.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:41.218 "adrfam": "ipv4", 00:11:41.218 "trsvcid": "$NVMF_PORT", 00:11:41.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:41.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:41.218 "hdgst": ${hdgst:-false}, 00:11:41.218 "ddgst": ${ddgst:-false} 00:11:41.218 }, 00:11:41.218 "method": "bdev_nvme_attach_controller" 00:11:41.218 } 00:11:41.218 EOF 00:11:41.218 )") 00:11:41.218 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:41.218 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:41.218 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:41.218 00:48:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:41.218 "params": { 00:11:41.218 "name": "Nvme0", 00:11:41.218 "trtype": "tcp", 00:11:41.218 "traddr": "10.0.0.2", 00:11:41.218 "adrfam": "ipv4", 00:11:41.218 "trsvcid": "4420", 00:11:41.218 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:41.218 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:41.218 "hdgst": false, 00:11:41.218 "ddgst": false 00:11:41.218 }, 00:11:41.218 "method": "bdev_nvme_attach_controller" 00:11:41.218 }' 00:11:41.218 [2024-07-16 00:48:15.785053] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:11:41.218 [2024-07-16 00:48:15.785135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2598924 ] 00:11:41.218 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.218 [2024-07-16 00:48:15.846956] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.218 [2024-07-16 00:48:15.958130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.795 Running I/O for 10 seconds... 
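Before injecting the fault, the test polls the bdevperf instance launched above for read I/O; the next log lines exercise the suite's waitforio helper. A minimal stand-alone sketch of that polling pattern (rpc_cmd in the log is the suite's wrapper around scripts/rpc.py; jq is assumed to be on PATH):

    # Sketch of the io-wait loop exercised in the following log lines
    wait_for_read_io() {
        local sock=$1 bdev=$2 i=10 reads
        while (( i-- )); do
            reads=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
            [ "$reads" -ge 100 ] && return 0    # same threshold as host_management.sh@58
            sleep 0.25
        done
        return 1
    }
    wait_for_read_io /var/tmp/bdevperf.sock Nvme0n1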
00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=65 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 65 -ge 100 ']' 00:11:41.795 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:11:42.057 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:11:42.057 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:42.057 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:42.057 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:42.057 00:48:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.057 00:48:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.057 00:48:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.057 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=449 00:11:42.057 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 449 -ge 100 ']' 00:11:42.057 00:48:16 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:11:42.057 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:42.057 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:42.057 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:42.057 00:48:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.057 00:48:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.057 [2024-07-16 00:48:16.661536] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.057 [2024-07-16 00:48:16.661610] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.057 [2024-07-16 00:48:16.661625] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.057 [2024-07-16 00:48:16.661637] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.057 [2024-07-16 00:48:16.661649] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.057 [2024-07-16 00:48:16.661661] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.057 [2024-07-16 00:48:16.661673] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.057 [2024-07-16 00:48:16.661684] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.057 [2024-07-16 00:48:16.661696] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.057 [2024-07-16 00:48:16.661707] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.057 [2024-07-16 00:48:16.661718] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.057 [2024-07-16 00:48:16.661730] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.057 [2024-07-16 00:48:16.661742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.057 [2024-07-16 00:48:16.661753] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.057 [2024-07-16 00:48:16.661764] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.057 [2024-07-16 00:48:16.661775] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.057 [2024-07-16 00:48:16.661787] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.057 [2024-07-16 00:48:16.661799] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 
00:11:42.057 [2024-07-16 00:48:16.661810] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.661822] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.661833] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.661858] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.661871] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.661891] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.661904] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.661915] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.661938] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.661950] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.661961] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.661972] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.661983] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.661995] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.662011] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.662022] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.662034] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.662046] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.662057] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.662069] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.662080] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.662092] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is 
same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.662103] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.662115] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.662127] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.662139] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.662151] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.662163] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.662175] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.662187] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.662198] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.662213] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff690 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.662754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.058 [2024-07-16 00:48:16.662798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.662815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.058 [2024-07-16 00:48:16.662830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.662845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.058 [2024-07-16 00:48:16.662859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.662874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.058 [2024-07-16 00:48:16.662897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.662912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167a980 is same with the state(5) to be set 00:11:42.058 [2024-07-16 00:48:16.663638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.663663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.663689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.663705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.663721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.663736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.663752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.663767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.663783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.663797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.663813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.663827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.663843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.663857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.663873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.663902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.663921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.663935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.663951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.663966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.663981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.663996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:11:42.058 [2024-07-16 00:48:16.664012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.664026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.664042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.664056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.664072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.664086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.664102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.664116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.664132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.664146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.664162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.664177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.664192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.664207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.664223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.664237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.664253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.664267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.664287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.664302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 
00:48:16.664318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.664333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.058 [2024-07-16 00:48:16.664349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.058 [2024-07-16 00:48:16.664363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664618] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.664969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.664983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:42.059 [2024-07-16 00:48:16.665628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.059 [2024-07-16 00:48:16.665726] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a8bca0 was disconnected and freed. reset controller. 00:11:42.060 00:48:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.060 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:42.060 00:48:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.060 00:48:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.060 [2024-07-16 00:48:16.666906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:42.060 task offset: 59008 on job bdev=Nvme0n1 fails 00:11:42.060 00:11:42.060 Latency(us) 00:11:42.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:42.060 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:42.060 Job: Nvme0n1 ended in about 0.40 seconds with error 00:11:42.060 Verification LBA range: start 0x0 length 0x400 00:11:42.060 Nvme0n1 : 0.40 1154.81 72.18 160.32 0.00 47339.51 2487.94 40972.14 00:11:42.060 =================================================================================================================== 00:11:42.060 Total : 1154.81 72.18 160.32 0.00 47339.51 2487.94 40972.14 00:11:42.060 [2024-07-16 00:48:16.669053] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:42.060 [2024-07-16 00:48:16.669082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167a980 (9): Bad file descriptor 00:11:42.060 00:48:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.060 00:48:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:42.060 [2024-07-16 00:48:16.679043] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
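The abort storm and the "Resetting controller successful" message above are the intended effect of the fault injection: the test removes and then restores the initiator's host NQN. Reduced to the two RPC calls shown verbatim at host_management.sh@84 and @85 (rpc_cmd is the suite's rpc.py wrapper):

    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # target aborts in-flight I/O, qpair torn down
    scripts/rpc.py nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # initiator reconnects and resets the controller
    sleep 1                                                                                          # host_management.sh@87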
00:11:42.997 00:48:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2598924 00:11:42.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2598924) - No such process 00:11:42.997 00:48:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:42.997 00:48:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:42.997 00:48:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:42.997 00:48:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:42.997 00:48:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:42.997 00:48:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:42.997 00:48:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:42.997 00:48:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:42.997 { 00:11:42.997 "params": { 00:11:42.997 "name": "Nvme$subsystem", 00:11:42.997 "trtype": "$TEST_TRANSPORT", 00:11:42.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:42.997 "adrfam": "ipv4", 00:11:42.997 "trsvcid": "$NVMF_PORT", 00:11:42.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:42.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:42.997 "hdgst": ${hdgst:-false}, 00:11:42.997 "ddgst": ${ddgst:-false} 00:11:42.997 }, 00:11:42.997 "method": "bdev_nvme_attach_controller" 00:11:42.997 } 00:11:42.997 EOF 00:11:42.997 )") 00:11:42.997 00:48:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:42.997 00:48:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:42.997 00:48:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:42.997 00:48:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:42.997 "params": { 00:11:42.997 "name": "Nvme0", 00:11:42.997 "trtype": "tcp", 00:11:42.997 "traddr": "10.0.0.2", 00:11:42.997 "adrfam": "ipv4", 00:11:42.997 "trsvcid": "4420", 00:11:42.997 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:42.997 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:42.997 "hdgst": false, 00:11:42.997 "ddgst": false 00:11:42.997 }, 00:11:42.997 "method": "bdev_nvme_attach_controller" 00:11:42.997 }' 00:11:42.997 [2024-07-16 00:48:17.725515] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:11:42.997 [2024-07-16 00:48:17.725591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2599206 ] 00:11:42.997 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.257 [2024-07-16 00:48:17.786048] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.257 [2024-07-16 00:48:17.897838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.517 Running I/O for 1 seconds... 
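The second bdevperf pass above reads its controller config from /dev/fd/62, the file descriptor bash typically creates for a process substitution, fed here by gen_nvmf_target_json 0; the bdev_nvme_attach_controller object it emits is printed just before the run. An equivalent, more explicit invocation (paths as used throughout this job) would look roughly like:

    ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1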
00:11:44.471 00:11:44.471 Latency(us) 00:11:44.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.471 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:44.471 Verification LBA range: start 0x0 length 0x400 00:11:44.471 Nvme0n1 : 1.04 1289.98 80.62 0.00 0.00 48893.76 11553.75 39612.87 00:11:44.471 =================================================================================================================== 00:11:44.471 Total : 1289.98 80.62 0.00 0.00 48893.76 11553.75 39612.87 00:11:44.729 00:48:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:44.729 00:48:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:44.729 00:48:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:44.729 00:48:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:44.729 00:48:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:44.729 00:48:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:44.729 00:48:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:44.729 00:48:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:44.729 00:48:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:44.729 00:48:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:44.729 00:48:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:44.729 rmmod nvme_tcp 00:11:44.729 rmmod nvme_fabrics 00:11:44.729 rmmod nvme_keyring 00:11:44.988 00:48:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:44.988 00:48:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:44.988 00:48:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:44.988 00:48:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2598877 ']' 00:11:44.988 00:48:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2598877 00:11:44.988 00:48:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2598877 ']' 00:11:44.988 00:48:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2598877 00:11:44.988 00:48:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:11:44.988 00:48:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:44.988 00:48:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2598877 00:11:44.988 00:48:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:44.988 00:48:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:44.988 00:48:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2598877' 00:11:44.988 killing process with pid 2598877 00:11:44.988 00:48:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2598877 00:11:44.988 00:48:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2598877 00:11:45.247 [2024-07-16 00:48:19.792970] 
app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:45.247 00:48:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:45.247 00:48:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:45.247 00:48:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:45.247 00:48:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:45.247 00:48:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:45.247 00:48:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.247 00:48:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.247 00:48:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.154 00:48:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:47.154 00:48:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:47.154 00:11:47.154 real 0m8.820s 00:11:47.154 user 0m19.779s 00:11:47.154 sys 0m2.656s 00:11:47.154 00:48:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:47.154 00:48:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:47.154 ************************************ 00:11:47.154 END TEST nvmf_host_management 00:11:47.154 ************************************ 00:11:47.154 00:48:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:47.154 00:48:21 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:47.154 00:48:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:47.154 00:48:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.154 00:48:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:47.412 ************************************ 00:11:47.412 START TEST nvmf_lvol 00:11:47.412 ************************************ 00:11:47.412 00:48:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:47.412 * Looking for test storage... 
00:11:47.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.412 00:48:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:47.412 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:47.412 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.413 00:48:21 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:47.413 00:48:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:49.314 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.314 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:49.315 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:49.315 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:49.315 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:49.315 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:49.315 
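The trace above is common.sh walking /sys/bus/pci to find the two Intel E810 functions (vendor 0x8086, device 0x159b) and the kernel net devices behind them (cvl_0_0 and cvl_0_1). As a rough standalone illustration — not the test's own helper, just a sketch assuming the standard sysfs layout and the same device IDs — the discovery step amounts to:

    # Sketch: list net devices that sit on Intel E810 (8086:159b) PCI functions,
    # mirroring the gather_supported_nvmf_pci_devs walk traced above.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 ]] || continue
        [[ $(cat "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do          # each function exposes its netdev under net/
            [[ -e $net ]] || continue
            echo "Found net device under ${pci##*/}: ${net##*/}"
        done
    done

With both cvl_0_* interfaces found, is_hw is set to yes and the TCP-specific namespace and address setup that follows can run against real ports.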
00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.315 00:48:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.315 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.315 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:49.315 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:49.315 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:49.315 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:49.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:49.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:11:49.572 00:11:49.572 --- 10.0.0.2 ping statistics --- 00:11:49.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.572 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:49.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:49.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:11:49.572 00:11:49.572 --- 10.0.0.1 ping statistics --- 00:11:49.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.572 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2601352 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2601352 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2601352 ']' 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:49.572 00:48:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:49.572 [2024-07-16 00:48:24.173826] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:11:49.572 [2024-07-16 00:48:24.173918] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.572 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.572 [2024-07-16 00:48:24.242345] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:49.831 [2024-07-16 00:48:24.358308] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.831 [2024-07-16 00:48:24.358367] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:49.831 [2024-07-16 00:48:24.358384] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.831 [2024-07-16 00:48:24.358397] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.831 [2024-07-16 00:48:24.358408] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.831 [2024-07-16 00:48:24.358488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.831 [2024-07-16 00:48:24.358558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.831 [2024-07-16 00:48:24.358561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.397 00:48:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:50.397 00:48:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:50.397 00:48:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:50.397 00:48:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:50.397 00:48:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:50.397 00:48:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.397 00:48:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:50.656 [2024-07-16 00:48:25.390964] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.915 00:48:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:51.173 00:48:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:51.173 00:48:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:51.431 00:48:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:51.431 00:48:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:51.688 00:48:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:51.946 00:48:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4888764e-2d6d-4183-a65f-77e98a87b824 00:11:51.946 00:48:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4888764e-2d6d-4183-a65f-77e98a87b824 lvol 20 00:11:52.204 00:48:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a5bcc910-102b-47a0-9e73-b010b546a80a 00:11:52.204 00:48:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:52.462 00:48:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a5bcc910-102b-47a0-9e73-b010b546a80a 00:11:52.720 00:48:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
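At this point the target side of the lvol test is fully assembled: two 64 MiB malloc bdevs striped into a raid0 bdev, an lvstore named lvs on top of it, a 20 MiB lvol inside that store, and subsystem nqn.2016-06.io.spdk:cnode0 exposing the lvol over TCP on 10.0.0.2:4420. Condensing the rpc.py calls traced above into one place (rpc.py is shorthand for scripts/rpc.py inside the target namespace; the UUIDs are per-run values captured into shell variables, as the script itself does):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                                  # -> Malloc0
    rpc.py bdev_malloc_create 64 512                                  # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)                  # prints the lvstore UUID
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)                 # prints the lvol UUID
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The listener notice and the discovery-subsystem listener that follow in the log confirm the port is accepting NVMe/TCP connections before perf starts.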
00:11:52.978 [2024-07-16 00:48:27.615932] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.978 00:48:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:53.236 00:48:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2601838 00:11:53.236 00:48:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:53.236 00:48:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:53.236 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.172 00:48:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a5bcc910-102b-47a0-9e73-b010b546a80a MY_SNAPSHOT 00:11:54.741 00:48:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2d69f3c2-68ff-498c-bb6a-e0dc4eaa029c 00:11:54.741 00:48:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a5bcc910-102b-47a0-9e73-b010b546a80a 30 00:11:54.999 00:48:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2d69f3c2-68ff-498c-bb6a-e0dc4eaa029c MY_CLONE 00:11:55.259 00:48:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9f70c70e-eeec-4f7a-bc7a-1097afc40e1f 00:11:55.259 00:48:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9f70c70e-eeec-4f7a-bc7a-1097afc40e1f 00:11:55.829 00:48:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2601838 00:12:04.000 Initializing NVMe Controllers 00:12:04.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:04.000 Controller IO queue size 128, less than required. 00:12:04.000 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:04.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:04.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:04.000 Initialization complete. Launching workers. 
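While spdk_nvme_perf (4 KiB random writes, queue depth 128, 10 seconds, core mask 0x18, i.e. cores 3 and 4) drives I/O against the exported namespace from the initiator side, the script exercises the lvol metadata path underneath the live workload: snapshot, resize, clone, inflate. A sketch of those four rpc.py calls, reusing the shorthand and shell variables from the previous sketch (the actual trace passes this run's raw UUIDs):

    snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # point-in-time snapshot of the busy lvol
    rpc.py bdev_lvol_resize "$lvol" 30                      # grow the lvol from its initial size (20) to 30
    clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)        # thin clone backed by the snapshot
    rpc.py bdev_lvol_inflate "$clone"                       # allocate all clusters so the clone no longer depends on the snapshot

The statistics that follow show both perf cores completed I/O across the whole sequence.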
00:12:04.000 ======================================================== 00:12:04.000 Latency(us) 00:12:04.000 Device Information : IOPS MiB/s Average min max 00:12:04.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9137.70 35.69 14010.23 2481.11 103661.34 00:12:04.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10965.60 42.83 11680.28 2135.95 69810.50 00:12:04.000 ======================================================== 00:12:04.000 Total : 20103.29 78.53 12739.33 2135.95 103661.34 00:12:04.000 00:12:04.000 00:48:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:04.000 00:48:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a5bcc910-102b-47a0-9e73-b010b546a80a 00:12:04.258 00:48:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4888764e-2d6d-4183-a65f-77e98a87b824 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:04.518 rmmod nvme_tcp 00:12:04.518 rmmod nvme_fabrics 00:12:04.518 rmmod nvme_keyring 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2601352 ']' 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2601352 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2601352 ']' 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2601352 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2601352 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2601352' 00:12:04.518 killing process with pid 2601352 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2601352 00:12:04.518 00:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2601352 00:12:04.776 00:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:04.776 
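Teardown runs in reverse order: the subsystem is deleted first so nothing references the lvol, then the lvol and its lvstore are removed, the kernel NVMe/TCP initiator modules are unloaded, and the nvmf_tgt process started earlier is killed. Condensed from the trace, with the same rpc.py shorthand and per-run shell variables as above:

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_delete "$lvol"
    rpc.py bdev_lvol_delete_lvstore -u "$lvs"
    sync
    modprobe -v -r nvme-tcp            # the rmmod output above shows nvme_fabrics and nvme_keyring drop with it
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"  # nvmfpid=2601352 in this run

The remaining fini steps (removing the cvl_0_0_ns_spdk namespace and flushing the interface addresses) follow in the next stretch of the log before the nvmf_lvs_grow test begins.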
00:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:04.777 00:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:04.777 00:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:04.777 00:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:04.777 00:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.777 00:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.777 00:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:07.315 00:12:07.315 real 0m19.640s 00:12:07.315 user 1m5.200s 00:12:07.315 sys 0m6.432s 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:07.315 ************************************ 00:12:07.315 END TEST nvmf_lvol 00:12:07.315 ************************************ 00:12:07.315 00:48:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:07.315 00:48:41 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:07.315 00:48:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:07.315 00:48:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.315 00:48:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:07.315 ************************************ 00:12:07.315 START TEST nvmf_lvs_grow 00:12:07.315 ************************************ 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:07.315 * Looking for test storage... 
00:12:07.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:12:07.315 00:48:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.229 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:09.230 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:09.230 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:09.230 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:09.230 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:09.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:12:09.230 00:12:09.230 --- 10.0.0.2 ping statistics --- 00:12:09.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.230 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:09.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:12:09.230 00:12:09.230 --- 10.0.0.1 ping statistics --- 00:12:09.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.230 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2605102 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2605102 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2605102 ']' 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:09.230 00:48:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:09.230 [2024-07-16 00:48:43.770431] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:12:09.230 [2024-07-16 00:48:43.770531] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.230 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.230 [2024-07-16 00:48:43.838280] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.230 [2024-07-16 00:48:43.954494] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.230 [2024-07-16 00:48:43.954545] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:09.230 [2024-07-16 00:48:43.954571] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.230 [2024-07-16 00:48:43.954584] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.230 [2024-07-16 00:48:43.954596] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:09.230 [2024-07-16 00:48:43.954625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.163 00:48:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:10.163 00:48:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:12:10.163 00:48:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:10.163 00:48:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:10.163 00:48:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:10.163 00:48:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.163 00:48:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:10.421 [2024-07-16 00:48:45.057617] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.421 00:48:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:10.421 00:48:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:10.421 00:48:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:10.421 00:48:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:10.421 ************************************ 00:12:10.421 START TEST lvs_grow_clean 00:12:10.421 ************************************ 00:12:10.421 00:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:12:10.421 00:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:10.421 00:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:10.421 00:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:10.421 00:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:10.421 00:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:10.421 00:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:10.421 00:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:10.421 00:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:10.421 00:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:10.679 00:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:12:10.679 00:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:11.247 00:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=340ca87f-f3bc-48e3-8c9a-665b10727b9f 00:12:11.247 00:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 340ca87f-f3bc-48e3-8c9a-665b10727b9f 00:12:11.247 00:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:11.247 00:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:11.247 00:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:11.247 00:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 340ca87f-f3bc-48e3-8c9a-665b10727b9f lvol 150 00:12:11.506 00:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=96ac2645-ab0b-431c-9e4d-dfdd79b2acc5 00:12:11.506 00:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:11.506 00:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:11.764 [2024-07-16 00:48:46.422046] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:11.764 [2024-07-16 00:48:46.422129] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:11.764 true 00:12:11.764 00:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 340ca87f-f3bc-48e3-8c9a-665b10727b9f 00:12:11.764 00:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:12.022 00:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:12.022 00:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:12.279 00:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 96ac2645-ab0b-431c-9e4d-dfdd79b2acc5 00:12:12.537 00:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:12.795 [2024-07-16 00:48:47.437157] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.795 00:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:13.053 00:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2605582 00:12:13.053 00:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:13.053 00:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:13.053 00:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2605582 /var/tmp/bdevperf.sock 00:12:13.053 00:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2605582 ']' 00:12:13.053 00:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:13.053 00:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:13.053 00:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:13.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:13.053 00:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:13.053 00:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:13.053 [2024-07-16 00:48:47.740838] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:12:13.053 [2024-07-16 00:48:47.740937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2605582 ] 00:12:13.053 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.053 [2024-07-16 00:48:47.803273] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.309 [2024-07-16 00:48:47.919468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.309 00:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:13.309 00:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:12:13.309 00:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:13.876 Nvme0n1 00:12:13.876 00:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:14.134 [ 00:12:14.134 { 00:12:14.134 "name": "Nvme0n1", 00:12:14.134 "aliases": [ 00:12:14.134 "96ac2645-ab0b-431c-9e4d-dfdd79b2acc5" 00:12:14.134 ], 00:12:14.134 "product_name": "NVMe disk", 00:12:14.134 "block_size": 4096, 00:12:14.134 "num_blocks": 38912, 00:12:14.134 "uuid": "96ac2645-ab0b-431c-9e4d-dfdd79b2acc5", 00:12:14.134 "assigned_rate_limits": { 00:12:14.134 "rw_ios_per_sec": 0, 00:12:14.134 "rw_mbytes_per_sec": 0, 00:12:14.134 "r_mbytes_per_sec": 0, 00:12:14.134 "w_mbytes_per_sec": 0 00:12:14.134 }, 00:12:14.134 "claimed": false, 00:12:14.134 "zoned": false, 00:12:14.134 "supported_io_types": { 00:12:14.134 "read": true, 00:12:14.134 "write": true, 00:12:14.134 "unmap": true, 00:12:14.134 "flush": true, 00:12:14.134 "reset": true, 00:12:14.134 "nvme_admin": true, 00:12:14.134 "nvme_io": true, 00:12:14.134 "nvme_io_md": false, 00:12:14.134 "write_zeroes": true, 00:12:14.134 "zcopy": false, 00:12:14.134 "get_zone_info": false, 00:12:14.134 "zone_management": false, 00:12:14.134 "zone_append": false, 00:12:14.134 "compare": true, 00:12:14.134 "compare_and_write": true, 00:12:14.134 "abort": true, 00:12:14.134 "seek_hole": false, 00:12:14.134 "seek_data": false, 00:12:14.134 "copy": true, 00:12:14.134 "nvme_iov_md": false 00:12:14.134 }, 00:12:14.134 "memory_domains": [ 00:12:14.134 { 00:12:14.134 "dma_device_id": "system", 00:12:14.134 "dma_device_type": 1 00:12:14.134 } 00:12:14.134 ], 00:12:14.134 "driver_specific": { 00:12:14.134 "nvme": [ 00:12:14.134 { 00:12:14.134 "trid": { 00:12:14.134 "trtype": "TCP", 00:12:14.134 "adrfam": "IPv4", 00:12:14.134 "traddr": "10.0.0.2", 00:12:14.134 "trsvcid": "4420", 00:12:14.134 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:14.134 }, 00:12:14.134 "ctrlr_data": { 00:12:14.134 "cntlid": 1, 00:12:14.134 "vendor_id": "0x8086", 00:12:14.134 "model_number": "SPDK bdev Controller", 00:12:14.134 "serial_number": "SPDK0", 00:12:14.134 "firmware_revision": "24.09", 00:12:14.134 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:14.134 "oacs": { 00:12:14.134 "security": 0, 00:12:14.134 "format": 0, 00:12:14.134 "firmware": 0, 00:12:14.134 "ns_manage": 0 00:12:14.134 }, 00:12:14.134 "multi_ctrlr": true, 00:12:14.134 "ana_reporting": false 00:12:14.134 }, 
00:12:14.134 "vs": { 00:12:14.134 "nvme_version": "1.3" 00:12:14.134 }, 00:12:14.134 "ns_data": { 00:12:14.134 "id": 1, 00:12:14.134 "can_share": true 00:12:14.134 } 00:12:14.134 } 00:12:14.134 ], 00:12:14.134 "mp_policy": "active_passive" 00:12:14.134 } 00:12:14.134 } 00:12:14.134 ] 00:12:14.134 00:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2605687 00:12:14.134 00:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:14.134 00:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:14.134 Running I/O for 10 seconds... 00:12:15.073 Latency(us) 00:12:15.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.073 Nvme0n1 : 1.00 14023.00 54.78 0.00 0.00 0.00 0.00 0.00 00:12:15.073 =================================================================================================================== 00:12:15.073 Total : 14023.00 54.78 0.00 0.00 0.00 0.00 0.00 00:12:15.073 00:12:16.011 00:48:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 340ca87f-f3bc-48e3-8c9a-665b10727b9f 00:12:16.269 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:16.269 Nvme0n1 : 2.00 14148.00 55.27 0.00 0.00 0.00 0.00 0.00 00:12:16.269 =================================================================================================================== 00:12:16.269 Total : 14148.00 55.27 0.00 0.00 0.00 0.00 0.00 00:12:16.269 00:12:16.269 true 00:12:16.269 00:48:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 340ca87f-f3bc-48e3-8c9a-665b10727b9f 00:12:16.269 00:48:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:16.530 00:48:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:16.530 00:48:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:16.530 00:48:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2605687 00:12:17.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:17.101 Nvme0n1 : 3.00 14274.33 55.76 0.00 0.00 0.00 0.00 0.00 00:12:17.101 =================================================================================================================== 00:12:17.101 Total : 14274.33 55.76 0.00 0.00 0.00 0.00 0.00 00:12:17.101 00:12:18.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.039 Nvme0n1 : 4.00 14294.00 55.84 0.00 0.00 0.00 0.00 0.00 00:12:18.039 =================================================================================================================== 00:12:18.039 Total : 14294.00 55.84 0.00 0.00 0.00 0.00 0.00 00:12:18.039 00:12:19.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.421 Nvme0n1 : 5.00 14324.60 55.96 0.00 0.00 0.00 0.00 0.00 00:12:19.421 =================================================================================================================== 00:12:19.421 
Total : 14324.60 55.96 0.00 0.00 0.00 0.00 0.00 00:12:19.421 00:12:20.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.360 Nvme0n1 : 6.00 14401.33 56.26 0.00 0.00 0.00 0.00 0.00 00:12:20.360 =================================================================================================================== 00:12:20.360 Total : 14401.33 56.26 0.00 0.00 0.00 0.00 0.00 00:12:20.360 00:12:21.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.297 Nvme0n1 : 7.00 14465.14 56.50 0.00 0.00 0.00 0.00 0.00 00:12:21.297 =================================================================================================================== 00:12:21.297 Total : 14465.14 56.50 0.00 0.00 0.00 0.00 0.00 00:12:21.297 00:12:22.234 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.234 Nvme0n1 : 8.00 14472.88 56.53 0.00 0.00 0.00 0.00 0.00 00:12:22.234 =================================================================================================================== 00:12:22.234 Total : 14472.88 56.53 0.00 0.00 0.00 0.00 0.00 00:12:22.234 00:12:23.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:23.172 Nvme0n1 : 9.00 14486.11 56.59 0.00 0.00 0.00 0.00 0.00 00:12:23.172 =================================================================================================================== 00:12:23.172 Total : 14486.11 56.59 0.00 0.00 0.00 0.00 0.00 00:12:23.172 00:12:24.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.112 Nvme0n1 : 10.00 14522.40 56.73 0.00 0.00 0.00 0.00 0.00 00:12:24.112 =================================================================================================================== 00:12:24.112 Total : 14522.40 56.73 0.00 0.00 0.00 0.00 0.00 00:12:24.112 00:12:24.112 00:12:24.112 Latency(us) 00:12:24.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.112 Nvme0n1 : 10.01 14524.88 56.74 0.00 0.00 8806.29 5048.70 17087.91 00:12:24.112 =================================================================================================================== 00:12:24.112 Total : 14524.88 56.74 0.00 0.00 8806.29 5048.70 17087.91 00:12:24.112 0 00:12:24.112 00:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2605582 00:12:24.112 00:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2605582 ']' 00:12:24.112 00:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2605582 00:12:24.112 00:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:12:24.112 00:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:24.112 00:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2605582 00:12:24.112 00:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:24.112 00:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:24.112 00:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2605582' 00:12:24.112 killing process with pid 2605582 00:12:24.112 00:48:58 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2605582 00:12:24.112 Received shutdown signal, test time was about 10.000000 seconds 00:12:24.112 00:12:24.112 Latency(us) 00:12:24.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.112 =================================================================================================================== 00:12:24.112 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:24.112 00:48:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2605582 00:12:24.679 00:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:24.679 00:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:25.247 00:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 340ca87f-f3bc-48e3-8c9a-665b10727b9f 00:12:25.247 00:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:25.247 00:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:25.247 00:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:25.247 00:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:25.506 [2024-07-16 00:49:00.169674] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:25.506 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 340ca87f-f3bc-48e3-8c9a-665b10727b9f 00:12:25.506 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:12:25.506 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 340ca87f-f3bc-48e3-8c9a-665b10727b9f 00:12:25.506 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:25.506 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.506 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:25.506 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.506 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:25.506 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.506 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:25.506 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:25.506 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 340ca87f-f3bc-48e3-8c9a-665b10727b9f 00:12:25.765 request: 00:12:25.765 { 00:12:25.765 "uuid": "340ca87f-f3bc-48e3-8c9a-665b10727b9f", 00:12:25.766 "method": "bdev_lvol_get_lvstores", 00:12:25.766 "req_id": 1 00:12:25.766 } 00:12:25.766 Got JSON-RPC error response 00:12:25.766 response: 00:12:25.766 { 00:12:25.766 "code": -19, 00:12:25.766 "message": "No such device" 00:12:25.766 } 00:12:25.766 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:12:25.766 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:25.766 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:25.766 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:25.766 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:26.025 aio_bdev 00:12:26.025 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 96ac2645-ab0b-431c-9e4d-dfdd79b2acc5 00:12:26.025 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=96ac2645-ab0b-431c-9e4d-dfdd79b2acc5 00:12:26.025 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:26.025 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:12:26.025 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:26.025 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:26.025 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:26.285 00:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 96ac2645-ab0b-431c-9e4d-dfdd79b2acc5 -t 2000 00:12:26.544 [ 00:12:26.544 { 00:12:26.544 "name": "96ac2645-ab0b-431c-9e4d-dfdd79b2acc5", 00:12:26.544 "aliases": [ 00:12:26.544 "lvs/lvol" 00:12:26.544 ], 00:12:26.544 "product_name": "Logical Volume", 00:12:26.544 "block_size": 4096, 00:12:26.544 "num_blocks": 38912, 00:12:26.544 "uuid": "96ac2645-ab0b-431c-9e4d-dfdd79b2acc5", 00:12:26.544 "assigned_rate_limits": { 00:12:26.544 "rw_ios_per_sec": 0, 00:12:26.544 "rw_mbytes_per_sec": 0, 00:12:26.544 "r_mbytes_per_sec": 0, 00:12:26.544 "w_mbytes_per_sec": 0 00:12:26.544 }, 00:12:26.544 "claimed": false, 00:12:26.544 "zoned": false, 00:12:26.544 "supported_io_types": { 00:12:26.544 "read": true, 00:12:26.544 "write": true, 00:12:26.544 "unmap": true, 00:12:26.544 "flush": false, 00:12:26.544 "reset": true, 00:12:26.544 "nvme_admin": false, 00:12:26.544 "nvme_io": false, 00:12:26.544 
"nvme_io_md": false, 00:12:26.544 "write_zeroes": true, 00:12:26.544 "zcopy": false, 00:12:26.544 "get_zone_info": false, 00:12:26.544 "zone_management": false, 00:12:26.544 "zone_append": false, 00:12:26.544 "compare": false, 00:12:26.544 "compare_and_write": false, 00:12:26.544 "abort": false, 00:12:26.544 "seek_hole": true, 00:12:26.544 "seek_data": true, 00:12:26.544 "copy": false, 00:12:26.544 "nvme_iov_md": false 00:12:26.544 }, 00:12:26.544 "driver_specific": { 00:12:26.544 "lvol": { 00:12:26.544 "lvol_store_uuid": "340ca87f-f3bc-48e3-8c9a-665b10727b9f", 00:12:26.544 "base_bdev": "aio_bdev", 00:12:26.544 "thin_provision": false, 00:12:26.544 "num_allocated_clusters": 38, 00:12:26.544 "snapshot": false, 00:12:26.544 "clone": false, 00:12:26.544 "esnap_clone": false 00:12:26.544 } 00:12:26.544 } 00:12:26.544 } 00:12:26.544 ] 00:12:26.544 00:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:12:26.544 00:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 340ca87f-f3bc-48e3-8c9a-665b10727b9f 00:12:26.544 00:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:26.802 00:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:26.802 00:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 340ca87f-f3bc-48e3-8c9a-665b10727b9f 00:12:26.802 00:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:27.061 00:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:27.061 00:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 96ac2645-ab0b-431c-9e4d-dfdd79b2acc5 00:12:27.319 00:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 340ca87f-f3bc-48e3-8c9a-665b10727b9f 00:12:27.578 00:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:27.837 00:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:27.837 00:12:27.837 real 0m17.431s 00:12:27.837 user 0m16.940s 00:12:27.837 sys 0m1.918s 00:12:27.837 00:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:27.837 00:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:27.837 ************************************ 00:12:27.837 END TEST lvs_grow_clean 00:12:27.837 ************************************ 00:12:27.837 00:49:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:27.837 00:49:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:27.837 00:49:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:27.837 00:49:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:12:27.837 00:49:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:27.837 ************************************ 00:12:27.837 START TEST lvs_grow_dirty 00:12:27.837 ************************************ 00:12:27.837 00:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:12:27.837 00:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:27.837 00:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:27.837 00:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:27.837 00:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:27.837 00:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:27.837 00:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:27.837 00:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:27.837 00:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:27.837 00:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:28.403 00:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:28.403 00:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:28.403 00:49:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5bee0f1f-bac6-4db7-be32-a3dae289a7c5 00:12:28.403 00:49:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bee0f1f-bac6-4db7-be32-a3dae289a7c5 00:12:28.403 00:49:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:28.661 00:49:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:28.661 00:49:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:28.661 00:49:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5bee0f1f-bac6-4db7-be32-a3dae289a7c5 lvol 150 00:12:28.918 00:49:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9d7a00a5-b734-4722-9056-103a6a5bd8dc 00:12:28.918 00:49:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:28.918 00:49:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:29.175 
[2024-07-16 00:49:03.896149] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:29.175 [2024-07-16 00:49:03.896255] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:29.175 true 00:12:29.175 00:49:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bee0f1f-bac6-4db7-be32-a3dae289a7c5 00:12:29.175 00:49:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:29.432 00:49:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:29.433 00:49:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:29.997 00:49:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9d7a00a5-b734-4722-9056-103a6a5bd8dc 00:12:29.997 00:49:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:30.257 [2024-07-16 00:49:04.939370] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.257 00:49:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:30.515 00:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2607719 00:12:30.515 00:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:30.515 00:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:30.515 00:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2607719 /var/tmp/bdevperf.sock 00:12:30.515 00:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2607719 ']' 00:12:30.515 00:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:30.515 00:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:30.515 00:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:30.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
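As in the clean run, bdevperf here is only the NVMe/TCP initiator: started with -z it opens its own RPC socket and waits, and the test then attaches the exported namespace and kicks off the 10-second randwrite run through that socket. A sketch of those two steps, copied from the commands this trace issues next (the socket path and bdev name Nvme0 are the test defaults):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # bdevperf was launched as: bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # -z means "wait for RPC"; perform_tests actually starts the timed I/O run
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests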
00:12:30.515 00:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:30.515 00:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:30.515 [2024-07-16 00:49:05.240901] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:12:30.515 [2024-07-16 00:49:05.240982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2607719 ] 00:12:30.515 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.773 [2024-07-16 00:49:05.302551] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.773 [2024-07-16 00:49:05.419054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.030 00:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:31.030 00:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:31.030 00:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:31.288 Nvme0n1 00:12:31.288 00:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:31.544 [ 00:12:31.544 { 00:12:31.544 "name": "Nvme0n1", 00:12:31.544 "aliases": [ 00:12:31.544 "9d7a00a5-b734-4722-9056-103a6a5bd8dc" 00:12:31.544 ], 00:12:31.544 "product_name": "NVMe disk", 00:12:31.544 "block_size": 4096, 00:12:31.544 "num_blocks": 38912, 00:12:31.544 "uuid": "9d7a00a5-b734-4722-9056-103a6a5bd8dc", 00:12:31.544 "assigned_rate_limits": { 00:12:31.544 "rw_ios_per_sec": 0, 00:12:31.544 "rw_mbytes_per_sec": 0, 00:12:31.544 "r_mbytes_per_sec": 0, 00:12:31.544 "w_mbytes_per_sec": 0 00:12:31.544 }, 00:12:31.544 "claimed": false, 00:12:31.544 "zoned": false, 00:12:31.544 "supported_io_types": { 00:12:31.544 "read": true, 00:12:31.544 "write": true, 00:12:31.544 "unmap": true, 00:12:31.544 "flush": true, 00:12:31.544 "reset": true, 00:12:31.544 "nvme_admin": true, 00:12:31.544 "nvme_io": true, 00:12:31.544 "nvme_io_md": false, 00:12:31.544 "write_zeroes": true, 00:12:31.544 "zcopy": false, 00:12:31.544 "get_zone_info": false, 00:12:31.544 "zone_management": false, 00:12:31.544 "zone_append": false, 00:12:31.544 "compare": true, 00:12:31.544 "compare_and_write": true, 00:12:31.544 "abort": true, 00:12:31.544 "seek_hole": false, 00:12:31.544 "seek_data": false, 00:12:31.544 "copy": true, 00:12:31.544 "nvme_iov_md": false 00:12:31.544 }, 00:12:31.544 "memory_domains": [ 00:12:31.544 { 00:12:31.544 "dma_device_id": "system", 00:12:31.544 "dma_device_type": 1 00:12:31.544 } 00:12:31.544 ], 00:12:31.544 "driver_specific": { 00:12:31.544 "nvme": [ 00:12:31.544 { 00:12:31.544 "trid": { 00:12:31.544 "trtype": "TCP", 00:12:31.544 "adrfam": "IPv4", 00:12:31.544 "traddr": "10.0.0.2", 00:12:31.544 "trsvcid": "4420", 00:12:31.544 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:31.544 }, 00:12:31.544 "ctrlr_data": { 00:12:31.544 "cntlid": 1, 00:12:31.544 "vendor_id": "0x8086", 00:12:31.544 "model_number": "SPDK bdev Controller", 00:12:31.544 "serial_number": "SPDK0", 
00:12:31.544 "firmware_revision": "24.09", 00:12:31.544 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:31.544 "oacs": { 00:12:31.544 "security": 0, 00:12:31.544 "format": 0, 00:12:31.544 "firmware": 0, 00:12:31.544 "ns_manage": 0 00:12:31.544 }, 00:12:31.544 "multi_ctrlr": true, 00:12:31.544 "ana_reporting": false 00:12:31.544 }, 00:12:31.544 "vs": { 00:12:31.544 "nvme_version": "1.3" 00:12:31.544 }, 00:12:31.544 "ns_data": { 00:12:31.544 "id": 1, 00:12:31.544 "can_share": true 00:12:31.544 } 00:12:31.544 } 00:12:31.544 ], 00:12:31.544 "mp_policy": "active_passive" 00:12:31.544 } 00:12:31.544 } 00:12:31.544 ] 00:12:31.544 00:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2607853 00:12:31.544 00:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:31.544 00:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:31.802 Running I/O for 10 seconds... 00:12:32.755 Latency(us) 00:12:32.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:32.755 Nvme0n1 : 1.00 14404.00 56.27 0.00 0.00 0.00 0.00 0.00 00:12:32.755 =================================================================================================================== 00:12:32.755 Total : 14404.00 56.27 0.00 0.00 0.00 0.00 0.00 00:12:32.755 00:12:33.713 00:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5bee0f1f-bac6-4db7-be32-a3dae289a7c5 00:12:33.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:33.713 Nvme0n1 : 2.00 14306.00 55.88 0.00 0.00 0.00 0.00 0.00 00:12:33.713 =================================================================================================================== 00:12:33.713 Total : 14306.00 55.88 0.00 0.00 0.00 0.00 0.00 00:12:33.713 00:12:33.970 true 00:12:33.970 00:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bee0f1f-bac6-4db7-be32-a3dae289a7c5 00:12:33.970 00:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:34.228 00:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:34.228 00:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:34.228 00:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2607853 00:12:34.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:34.795 Nvme0n1 : 3.00 14295.00 55.84 0.00 0.00 0.00 0.00 0.00 00:12:34.795 =================================================================================================================== 00:12:34.795 Total : 14295.00 55.84 0.00 0.00 0.00 0.00 0.00 00:12:34.795 00:12:35.731 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:35.731 Nvme0n1 : 4.00 14385.00 56.19 0.00 0.00 0.00 0.00 0.00 00:12:35.731 =================================================================================================================== 00:12:35.731 Total : 14385.00 56.19 0.00 
0.00 0.00 0.00 0.00 00:12:35.731 00:12:36.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:36.667 Nvme0n1 : 5.00 14413.80 56.30 0.00 0.00 0.00 0.00 0.00 00:12:36.667 =================================================================================================================== 00:12:36.667 Total : 14413.80 56.30 0.00 0.00 0.00 0.00 0.00 00:12:36.667 00:12:38.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:38.046 Nvme0n1 : 6.00 14454.00 56.46 0.00 0.00 0.00 0.00 0.00 00:12:38.046 =================================================================================================================== 00:12:38.046 Total : 14454.00 56.46 0.00 0.00 0.00 0.00 0.00 00:12:38.046 00:12:38.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:38.983 Nvme0n1 : 7.00 14464.71 56.50 0.00 0.00 0.00 0.00 0.00 00:12:38.983 =================================================================================================================== 00:12:38.983 Total : 14464.71 56.50 0.00 0.00 0.00 0.00 0.00 00:12:38.983 00:12:39.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:39.921 Nvme0n1 : 8.00 14480.62 56.56 0.00 0.00 0.00 0.00 0.00 00:12:39.921 =================================================================================================================== 00:12:39.921 Total : 14480.62 56.56 0.00 0.00 0.00 0.00 0.00 00:12:39.921 00:12:40.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:40.859 Nvme0n1 : 9.00 14507.22 56.67 0.00 0.00 0.00 0.00 0.00 00:12:40.859 =================================================================================================================== 00:12:40.859 Total : 14507.22 56.67 0.00 0.00 0.00 0.00 0.00 00:12:40.859 00:12:41.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:41.799 Nvme0n1 : 10.00 14547.60 56.83 0.00 0.00 0.00 0.00 0.00 00:12:41.799 =================================================================================================================== 00:12:41.799 Total : 14547.60 56.83 0.00 0.00 0.00 0.00 0.00 00:12:41.799 00:12:41.799 00:12:41.799 Latency(us) 00:12:41.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:41.799 Nvme0n1 : 10.01 14546.62 56.82 0.00 0.00 8792.77 2487.94 12524.66 00:12:41.799 =================================================================================================================== 00:12:41.799 Total : 14546.62 56.82 0.00 0.00 8792.77 2487.94 12524.66 00:12:41.799 0 00:12:41.799 00:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2607719 00:12:41.799 00:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2607719 ']' 00:12:41.799 00:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2607719 00:12:41.799 00:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:12:41.799 00:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:41.799 00:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2607719 00:12:41.799 00:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:41.799 00:49:16 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:41.799 00:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2607719' 00:12:41.799 killing process with pid 2607719 00:12:41.799 00:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2607719 00:12:41.799 Received shutdown signal, test time was about 10.000000 seconds 00:12:41.799 00:12:41.799 Latency(us) 00:12:41.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.799 =================================================================================================================== 00:12:41.799 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:41.799 00:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2607719 00:12:42.056 00:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:42.313 00:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:42.571 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bee0f1f-bac6-4db7-be32-a3dae289a7c5 00:12:42.572 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:42.830 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:42.830 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:42.830 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2605102 00:12:42.830 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2605102 00:12:42.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2605102 Killed "${NVMF_APP[@]}" "$@" 00:12:42.830 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:42.830 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:42.830 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:42.830 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:42.830 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:42.830 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2609187 00:12:42.830 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:42.830 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2609187 00:12:42.830 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2609187 ']' 00:12:42.830 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.830 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:12:42.830 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.830 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:42.831 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:42.831 [2024-07-16 00:49:17.576696] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:12:42.831 [2024-07-16 00:49:17.576769] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.090 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.090 [2024-07-16 00:49:17.645935] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.090 [2024-07-16 00:49:17.760439] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.090 [2024-07-16 00:49:17.760505] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.090 [2024-07-16 00:49:17.760522] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.090 [2024-07-16 00:49:17.760536] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.090 [2024-07-16 00:49:17.760548] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
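This restart is the crux of the dirty case: the previous target was killed with SIGKILL right after bdev_lvol_grow_lvstore, so the grown lvstore metadata was never cleanly written out. Re-creating the AIO bdev in the fresh target replays the blobstore (the "Performing recovery on blobstore" notice below), and the test then asserts the grown geometry survived. A sketch of that check, assuming the same shorthand as above and the lvstore UUID from this run:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
  # re-attaching the backing file triggers blobstore recovery of blobs 0x0 and 0x1
  $RPC bdev_aio_create $AIO aio_bdev 4096
  $RPC bdev_wait_for_examine
  # 150M lvol = 38 allocated clusters; the grown store has 99 total, so 61 must be free
  lvs=5bee0f1f-bac6-4db7-be32-a3dae289a7c5
  [ "$($RPC bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters')" -eq 61 ]
  [ "$($RPC bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters')" -eq 99 ]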
00:12:43.090 [2024-07-16 00:49:17.760585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.349 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.349 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:43.349 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:43.349 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:43.349 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:43.349 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.349 00:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:43.608 [2024-07-16 00:49:18.132740] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:43.608 [2024-07-16 00:49:18.132903] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:43.608 [2024-07-16 00:49:18.132962] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:43.608 00:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:43.608 00:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9d7a00a5-b734-4722-9056-103a6a5bd8dc 00:12:43.608 00:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=9d7a00a5-b734-4722-9056-103a6a5bd8dc 00:12:43.608 00:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:43.608 00:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:43.608 00:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:43.608 00:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:43.608 00:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:43.867 00:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9d7a00a5-b734-4722-9056-103a6a5bd8dc -t 2000 00:12:44.125 [ 00:12:44.125 { 00:12:44.125 "name": "9d7a00a5-b734-4722-9056-103a6a5bd8dc", 00:12:44.125 "aliases": [ 00:12:44.125 "lvs/lvol" 00:12:44.125 ], 00:12:44.125 "product_name": "Logical Volume", 00:12:44.125 "block_size": 4096, 00:12:44.125 "num_blocks": 38912, 00:12:44.125 "uuid": "9d7a00a5-b734-4722-9056-103a6a5bd8dc", 00:12:44.125 "assigned_rate_limits": { 00:12:44.125 "rw_ios_per_sec": 0, 00:12:44.125 "rw_mbytes_per_sec": 0, 00:12:44.125 "r_mbytes_per_sec": 0, 00:12:44.125 "w_mbytes_per_sec": 0 00:12:44.125 }, 00:12:44.125 "claimed": false, 00:12:44.125 "zoned": false, 00:12:44.125 "supported_io_types": { 00:12:44.125 "read": true, 00:12:44.125 "write": true, 00:12:44.125 "unmap": true, 00:12:44.125 "flush": false, 00:12:44.125 "reset": true, 00:12:44.125 "nvme_admin": false, 00:12:44.125 "nvme_io": false, 00:12:44.125 "nvme_io_md": 
false, 00:12:44.126 "write_zeroes": true, 00:12:44.126 "zcopy": false, 00:12:44.126 "get_zone_info": false, 00:12:44.126 "zone_management": false, 00:12:44.126 "zone_append": false, 00:12:44.126 "compare": false, 00:12:44.126 "compare_and_write": false, 00:12:44.126 "abort": false, 00:12:44.126 "seek_hole": true, 00:12:44.126 "seek_data": true, 00:12:44.126 "copy": false, 00:12:44.126 "nvme_iov_md": false 00:12:44.126 }, 00:12:44.126 "driver_specific": { 00:12:44.126 "lvol": { 00:12:44.126 "lvol_store_uuid": "5bee0f1f-bac6-4db7-be32-a3dae289a7c5", 00:12:44.126 "base_bdev": "aio_bdev", 00:12:44.126 "thin_provision": false, 00:12:44.126 "num_allocated_clusters": 38, 00:12:44.126 "snapshot": false, 00:12:44.126 "clone": false, 00:12:44.126 "esnap_clone": false 00:12:44.126 } 00:12:44.126 } 00:12:44.126 } 00:12:44.126 ] 00:12:44.126 00:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:44.126 00:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bee0f1f-bac6-4db7-be32-a3dae289a7c5 00:12:44.126 00:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:44.384 00:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:44.384 00:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bee0f1f-bac6-4db7-be32-a3dae289a7c5 00:12:44.384 00:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:44.642 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:44.642 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:44.642 [2024-07-16 00:49:19.393755] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:44.901 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bee0f1f-bac6-4db7-be32-a3dae289a7c5 00:12:44.901 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:44.901 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bee0f1f-bac6-4db7-be32-a3dae289a7c5 00:12:44.901 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:44.901 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:44.901 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:44.901 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:44.901 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:12:44.901 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:44.901 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:44.901 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:44.901 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bee0f1f-bac6-4db7-be32-a3dae289a7c5 00:12:45.160 request: 00:12:45.160 { 00:12:45.160 "uuid": "5bee0f1f-bac6-4db7-be32-a3dae289a7c5", 00:12:45.160 "method": "bdev_lvol_get_lvstores", 00:12:45.160 "req_id": 1 00:12:45.160 } 00:12:45.160 Got JSON-RPC error response 00:12:45.160 response: 00:12:45.160 { 00:12:45.160 "code": -19, 00:12:45.160 "message": "No such device" 00:12:45.160 } 00:12:45.160 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:45.160 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:45.160 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:45.160 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:45.160 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:45.420 aio_bdev 00:12:45.420 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9d7a00a5-b734-4722-9056-103a6a5bd8dc 00:12:45.420 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=9d7a00a5-b734-4722-9056-103a6a5bd8dc 00:12:45.420 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:45.420 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:45.420 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:45.420 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:45.420 00:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:45.679 00:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9d7a00a5-b734-4722-9056-103a6a5bd8dc -t 2000 00:12:45.680 [ 00:12:45.680 { 00:12:45.680 "name": "9d7a00a5-b734-4722-9056-103a6a5bd8dc", 00:12:45.680 "aliases": [ 00:12:45.680 "lvs/lvol" 00:12:45.680 ], 00:12:45.680 "product_name": "Logical Volume", 00:12:45.680 "block_size": 4096, 00:12:45.680 "num_blocks": 38912, 00:12:45.680 "uuid": "9d7a00a5-b734-4722-9056-103a6a5bd8dc", 00:12:45.680 "assigned_rate_limits": { 00:12:45.680 "rw_ios_per_sec": 0, 00:12:45.680 "rw_mbytes_per_sec": 0, 00:12:45.680 "r_mbytes_per_sec": 0, 00:12:45.680 "w_mbytes_per_sec": 0 00:12:45.680 }, 00:12:45.680 "claimed": false, 00:12:45.680 "zoned": false, 00:12:45.680 "supported_io_types": { 
00:12:45.680 "read": true, 00:12:45.680 "write": true, 00:12:45.680 "unmap": true, 00:12:45.680 "flush": false, 00:12:45.680 "reset": true, 00:12:45.680 "nvme_admin": false, 00:12:45.680 "nvme_io": false, 00:12:45.680 "nvme_io_md": false, 00:12:45.680 "write_zeroes": true, 00:12:45.680 "zcopy": false, 00:12:45.680 "get_zone_info": false, 00:12:45.680 "zone_management": false, 00:12:45.680 "zone_append": false, 00:12:45.680 "compare": false, 00:12:45.680 "compare_and_write": false, 00:12:45.680 "abort": false, 00:12:45.680 "seek_hole": true, 00:12:45.680 "seek_data": true, 00:12:45.680 "copy": false, 00:12:45.680 "nvme_iov_md": false 00:12:45.680 }, 00:12:45.680 "driver_specific": { 00:12:45.680 "lvol": { 00:12:45.680 "lvol_store_uuid": "5bee0f1f-bac6-4db7-be32-a3dae289a7c5", 00:12:45.680 "base_bdev": "aio_bdev", 00:12:45.680 "thin_provision": false, 00:12:45.680 "num_allocated_clusters": 38, 00:12:45.680 "snapshot": false, 00:12:45.680 "clone": false, 00:12:45.680 "esnap_clone": false 00:12:45.680 } 00:12:45.680 } 00:12:45.680 } 00:12:45.680 ] 00:12:45.938 00:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:45.938 00:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bee0f1f-bac6-4db7-be32-a3dae289a7c5 00:12:45.938 00:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:46.197 00:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:46.197 00:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bee0f1f-bac6-4db7-be32-a3dae289a7c5 00:12:46.197 00:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:46.197 00:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:46.197 00:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9d7a00a5-b734-4722-9056-103a6a5bd8dc 00:12:46.455 00:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5bee0f1f-bac6-4db7-be32-a3dae289a7c5 00:12:47.055 00:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:47.055 00:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:47.055 00:12:47.055 real 0m19.170s 00:12:47.055 user 0m48.345s 00:12:47.055 sys 0m5.122s 00:12:47.055 00:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:47.055 00:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:47.055 ************************************ 00:12:47.055 END TEST lvs_grow_dirty 00:12:47.055 ************************************ 00:12:47.055 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:47.055 00:49:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:12:47.055 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:12:47.055 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:12:47.055 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:47.055 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:47.055 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:47.055 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:47.055 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:47.055 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:47.055 nvmf_trace.0 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:47.315 rmmod nvme_tcp 00:12:47.315 rmmod nvme_fabrics 00:12:47.315 rmmod nvme_keyring 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2609187 ']' 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2609187 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2609187 ']' 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2609187 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2609187 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2609187' 00:12:47.315 killing process with pid 2609187 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2609187 00:12:47.315 00:49:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2609187 00:12:47.574 00:49:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:47.574 00:49:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:47.574 00:49:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:47.574 
00:49:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:47.574 00:49:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:47.574 00:49:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.574 00:49:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:47.574 00:49:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.474 00:49:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:49.474 00:12:49.474 real 0m42.615s 00:12:49.474 user 1m11.202s 00:12:49.474 sys 0m8.905s 00:12:49.474 00:49:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:49.474 00:49:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:49.474 ************************************ 00:12:49.474 END TEST nvmf_lvs_grow 00:12:49.474 ************************************ 00:12:49.733 00:49:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:49.733 00:49:24 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:49.733 00:49:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:49.733 00:49:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:49.733 00:49:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:49.733 ************************************ 00:12:49.733 START TEST nvmf_bdev_io_wait 00:12:49.733 ************************************ 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:49.733 * Looking for test storage... 
00:12:49.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.733 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:49.734 00:49:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:51.639 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:51.639 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:51.639 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:51.639 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:51.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:51.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:12:51.639 00:12:51.639 --- 10.0.0.2 ping statistics --- 00:12:51.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.639 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:51.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:12:51.639 00:12:51.639 --- 10.0.0.1 ping statistics --- 00:12:51.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.639 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:51.639 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.640 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:51.640 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:51.900 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:51.900 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:51.900 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:51.900 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.900 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2611613 00:12:51.900 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:51.900 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2611613 00:12:51.900 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2611613 ']' 00:12:51.901 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.901 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:51.901 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.901 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:51.901 00:49:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.901 [2024-07-16 00:49:26.469574] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:12:51.901 [2024-07-16 00:49:26.469660] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.901 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.901 [2024-07-16 00:49:26.541148] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.901 [2024-07-16 00:49:26.652903] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.901 [2024-07-16 00:49:26.652980] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.901 [2024-07-16 00:49:26.652994] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.901 [2024-07-16 00:49:26.653004] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.901 [2024-07-16 00:49:26.653014] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.901 [2024-07-16 00:49:26.653076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.901 [2024-07-16 00:49:26.653140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.901 [2024-07-16 00:49:26.653204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.901 [2024-07-16 00:49:26.653207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:52.838 [2024-07-16 00:49:27.538992] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
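The trace above is the standard bring-up for this test: nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace with --wait-for-rpc, the harness waits for /var/tmp/spdk.sock, and then configures the target over JSON-RPC. A minimal hand-run sketch of the same sequence, assuming a root shell, an SPDK build under ./spdk, and the namespace/addresses already set up as in the ping checks earlier in the log:

# start the target paused inside the namespace used for the TCP data path
ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

# once /var/tmp/spdk.sock is listening, drive it with the same RPCs the harness traces above
./spdk/scripts/rpc.py bdev_set_options -p 5 -c 1
./spdk/scripts/rpc.py framework_start_init
./spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

The flags are taken verbatim from the log; only the ./spdk path prefix and the backgrounding with & are simplifications of what nvmfappstart/waitforlisten do in the test harness.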
00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:52.838 Malloc0 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.838 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:53.097 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.097 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.097 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.097 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:53.097 [2024-07-16 00:49:27.601373] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.097 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.097 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2611863 00:12:53.097 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:53.097 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:53.097 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:53.097 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2611865 00:12:53.097 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:53.097 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:53.097 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:53.097 { 00:12:53.097 "params": { 00:12:53.097 "name": "Nvme$subsystem", 00:12:53.097 "trtype": "$TEST_TRANSPORT", 00:12:53.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:53.097 "adrfam": "ipv4", 00:12:53.097 "trsvcid": "$NVMF_PORT", 00:12:53.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:53.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:53.097 "hdgst": ${hdgst:-false}, 00:12:53.097 "ddgst": ${ddgst:-false} 00:12:53.097 }, 00:12:53.097 "method": "bdev_nvme_attach_controller" 00:12:53.097 } 00:12:53.097 EOF 00:12:53.097 )") 00:12:53.097 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:53.097 00:49:27 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:53.097 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2611867 00:12:53.097 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:53.097 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:53.097 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:53.097 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:53.097 { 00:12:53.097 "params": { 00:12:53.097 "name": "Nvme$subsystem", 00:12:53.097 "trtype": "$TEST_TRANSPORT", 00:12:53.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:53.097 "adrfam": "ipv4", 00:12:53.097 "trsvcid": "$NVMF_PORT", 00:12:53.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:53.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:53.098 "hdgst": ${hdgst:-false}, 00:12:53.098 "ddgst": ${ddgst:-false} 00:12:53.098 }, 00:12:53.098 "method": "bdev_nvme_attach_controller" 00:12:53.098 } 00:12:53.098 EOF 00:12:53.098 )") 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2611870 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:53.098 { 00:12:53.098 "params": { 00:12:53.098 "name": "Nvme$subsystem", 00:12:53.098 "trtype": "$TEST_TRANSPORT", 00:12:53.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:53.098 "adrfam": "ipv4", 00:12:53.098 "trsvcid": "$NVMF_PORT", 00:12:53.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:53.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:53.098 "hdgst": ${hdgst:-false}, 00:12:53.098 "ddgst": ${ddgst:-false} 00:12:53.098 }, 00:12:53.098 "method": "bdev_nvme_attach_controller" 00:12:53.098 } 00:12:53.098 EOF 00:12:53.098 )") 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 
-- # config+=("$(cat <<-EOF 00:12:53.098 { 00:12:53.098 "params": { 00:12:53.098 "name": "Nvme$subsystem", 00:12:53.098 "trtype": "$TEST_TRANSPORT", 00:12:53.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:53.098 "adrfam": "ipv4", 00:12:53.098 "trsvcid": "$NVMF_PORT", 00:12:53.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:53.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:53.098 "hdgst": ${hdgst:-false}, 00:12:53.098 "ddgst": ${ddgst:-false} 00:12:53.098 }, 00:12:53.098 "method": "bdev_nvme_attach_controller" 00:12:53.098 } 00:12:53.098 EOF 00:12:53.098 )") 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2611863 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:53.098 "params": { 00:12:53.098 "name": "Nvme1", 00:12:53.098 "trtype": "tcp", 00:12:53.098 "traddr": "10.0.0.2", 00:12:53.098 "adrfam": "ipv4", 00:12:53.098 "trsvcid": "4420", 00:12:53.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:53.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:53.098 "hdgst": false, 00:12:53.098 "ddgst": false 00:12:53.098 }, 00:12:53.098 "method": "bdev_nvme_attach_controller" 00:12:53.098 }' 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:53.098 "params": { 00:12:53.098 "name": "Nvme1", 00:12:53.098 "trtype": "tcp", 00:12:53.098 "traddr": "10.0.0.2", 00:12:53.098 "adrfam": "ipv4", 00:12:53.098 "trsvcid": "4420", 00:12:53.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:53.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:53.098 "hdgst": false, 00:12:53.098 "ddgst": false 00:12:53.098 }, 00:12:53.098 "method": "bdev_nvme_attach_controller" 00:12:53.098 }' 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:53.098 "params": { 00:12:53.098 "name": "Nvme1", 00:12:53.098 "trtype": "tcp", 00:12:53.098 "traddr": "10.0.0.2", 00:12:53.098 "adrfam": "ipv4", 00:12:53.098 "trsvcid": "4420", 00:12:53.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:53.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:53.098 "hdgst": false, 00:12:53.098 "ddgst": false 00:12:53.098 }, 00:12:53.098 "method": "bdev_nvme_attach_controller" 00:12:53.098 }' 00:12:53.098 00:49:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:53.098 "params": { 00:12:53.098 "name": "Nvme1", 00:12:53.098 "trtype": "tcp", 00:12:53.098 "traddr": "10.0.0.2", 00:12:53.098 "adrfam": "ipv4", 00:12:53.098 "trsvcid": "4420", 00:12:53.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:53.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:53.098 "hdgst": false, 00:12:53.098 "ddgst": false 00:12:53.098 }, 00:12:53.098 "method": "bdev_nvme_attach_controller" 00:12:53.098 }' 00:12:53.098 [2024-07-16 00:49:27.648198] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:12:53.098 [2024-07-16 00:49:27.648198] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:12:53.098 [2024-07-16 00:49:27.648317] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-16 00:49:27.648317] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:53.098 --proc-type=auto ] 00:12:53.098 [2024-07-16 00:49:27.649136] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:12:53.098 [2024-07-16 00:49:27.649136] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:12:53.098 [2024-07-16 00:49:27.649227] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-16 00:49:27.649227] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:53.098 --proc-type=auto ] 00:12:53.098 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.098 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.098 [2024-07-16 00:49:27.822696] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.357 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.357 [2024-07-16 00:49:27.922016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:53.357 [2024-07-16 00:49:27.924740] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.357 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.357 [2024-07-16 00:49:27.999789] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.357 [2024-07-16 00:49:28.023161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:53.357 [2024-07-16 00:49:28.091506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:53.357 [2024-07-16 00:49:28.100091] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.615 [2024-07-16 00:49:28.201656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:53.615 Running I/O for 1 seconds... 00:12:53.873 Running I/O for 1 seconds... 00:12:53.873 Running I/O for 1 seconds... 00:12:54.810 00:12:54.810 Latency(us) 00:12:54.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.810 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:54.810 Nvme1n1 : 1.01 11849.92 46.29 0.00 0.00 10761.94 6116.69 21068.61 00:12:54.810 =================================================================================================================== 00:12:54.810 Total : 11849.92 46.29 0.00 0.00 10761.94 6116.69 21068.61 00:12:54.810 00:12:54.810 Latency(us) 00:12:54.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.810 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:54.810 Nvme1n1 : 1.01 9957.48 38.90 0.00 0.00 12744.48 6140.97 21942.42 00:12:54.810 =================================================================================================================== 00:12:54.810 Total : 9957.48 38.90 0.00 0.00 12744.48 6140.97 21942.42 00:12:54.810 Running I/O for 1 seconds... 
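The throughput columns in these bdevperf tables are internally consistent: MiB/s is simply IOPS times the 4 KiB I/O size. A quick sanity calculation against the write row above (not part of the log output):

# 11849.92 IOPS * 4096 bytes per I/O, expressed in MiB/s
echo '11849.92 * 4096 / 1048576' | bc -l    # ~46.29, matching the MiB/s column for the write job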
00:12:54.810 00:49:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2611865 00:12:55.068 00:12:55.068 Latency(us) 00:12:55.068 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.068 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:55.068 Nvme1n1 : 1.01 3763.50 14.70 0.00 0.00 33873.13 7718.68 664874.86 00:12:55.068 =================================================================================================================== 00:12:55.068 Total : 3763.50 14.70 0.00 0.00 33873.13 7718.68 664874.86 00:12:55.327 00:49:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2611867 00:12:55.893 00:12:55.893 Latency(us) 00:12:55.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.893 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:55.893 Nvme1n1 : 1.00 198804.61 776.58 0.00 0.00 641.27 271.55 1250.04 00:12:55.893 =================================================================================================================== 00:12:55.893 Total : 198804.61 776.58 0.00 0.00 641.27 271.55 1250.04 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2611870 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:56.153 rmmod nvme_tcp 00:12:56.153 rmmod nvme_fabrics 00:12:56.153 rmmod nvme_keyring 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2611613 ']' 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2611613 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2611613 ']' 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2611613 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2611613 00:12:56.153 00:49:30 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2611613' 00:12:56.153 killing process with pid 2611613 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2611613 00:12:56.153 00:49:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2611613 00:12:56.412 00:49:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:56.412 00:49:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:56.412 00:49:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:56.412 00:49:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:56.412 00:49:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:56.412 00:49:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.412 00:49:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.412 00:49:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.006 00:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:59.006 00:12:59.006 real 0m8.917s 00:12:59.006 user 0m24.304s 00:12:59.006 sys 0m3.566s 00:12:59.006 00:49:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:59.006 00:49:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:59.006 ************************************ 00:12:59.006 END TEST nvmf_bdev_io_wait 00:12:59.006 ************************************ 00:12:59.006 00:49:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:59.006 00:49:33 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:59.006 00:49:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:59.006 00:49:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.006 00:49:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:59.006 ************************************ 00:12:59.006 START TEST nvmf_queue_depth 00:12:59.006 ************************************ 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:59.006 * Looking for test storage... 
00:12:59.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:59.006 00:49:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:00.911 
00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:00.911 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:00.911 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:00.911 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:00.911 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:00.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:00.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:13:00.911 00:13:00.911 --- 10.0.0.2 ping statistics --- 00:13:00.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.911 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:00.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:00.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:13:00.911 00:13:00.911 --- 10.0.0.1 ping statistics --- 00:13:00.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.911 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2614105 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2614105 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2614105 ']' 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:00.911 00:49:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:00.912 [2024-07-16 00:49:35.361369] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
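For readers following the trace, the nvmftestinit / nvmf_tcp_init block above reduces to roughly the following shell sequence. This is a simplified sketch using the interface names, addresses and paths this particular run happened to pick; the helpers in nvmf/common.sh do extra bookkeeping (PCI discovery, flushing stale addresses) that is omitted here.

    # one E810 port is moved into a network namespace and used as the target side
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # the initiator keeps cvl_0_1 in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic to port 4420 in, then sanity-check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the NVMe-oF target is then started inside the namespace on core mask 0x2
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2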
00:13:00.912 [2024-07-16 00:49:35.361452] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.912 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.912 [2024-07-16 00:49:35.428911] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.912 [2024-07-16 00:49:35.548894] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.912 [2024-07-16 00:49:35.548961] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.912 [2024-07-16 00:49:35.548975] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.912 [2024-07-16 00:49:35.548987] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.912 [2024-07-16 00:49:35.548998] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:00.912 [2024-07-16 00:49:35.549025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:01.848 [2024-07-16 00:49:36.325278] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:01.848 Malloc0 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.848 
00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:01.848 [2024-07-16 00:49:36.387670] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2614255 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2614255 /var/tmp/bdevperf.sock 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2614255 ']' 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:01.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:01.848 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:01.848 [2024-07-16 00:49:36.435243] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
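The target-side configuration just traced, plus the bdevperf launch, can be read as the sequence below. This is a sketch only: rpc_cmd is assumed to forward to scripts/rpc.py on the target's default RPC socket (/var/tmp/spdk.sock), and the flags are exactly the ones queue_depth.sh issued above.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    # 64 MB RAM-backed bdev with 512-byte blocks, exposed as a namespace of cnode1
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: bdevperf waits on its own RPC socket, queue depth 1024, 4 KiB verify I/O, 10 s
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10

The bdev_nvme_attach_controller call that connects bdevperf to 10.0.0.2:4420 and the perform_tests RPC that kicks off the actual I/O follow in the trace below.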
00:13:01.848 [2024-07-16 00:49:36.435325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2614255 ] 00:13:01.848 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.848 [2024-07-16 00:49:36.498428] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.109 [2024-07-16 00:49:36.617107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.109 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.109 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:13:02.109 00:49:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:02.109 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.109 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:02.375 NVMe0n1 00:13:02.375 00:49:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.375 00:49:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:02.375 Running I/O for 10 seconds... 00:13:12.405 00:13:12.405 Latency(us) 00:13:12.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.405 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:12.405 Verification LBA range: start 0x0 length 0x4000 00:13:12.405 NVMe0n1 : 10.09 8473.36 33.10 0.00 0.00 120230.73 24175.50 77672.30 00:13:12.405 =================================================================================================================== 00:13:12.405 Total : 8473.36 33.10 0.00 0.00 120230.73 24175.50 77672.30 00:13:12.405 0 00:13:12.665 00:49:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2614255 00:13:12.665 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2614255 ']' 00:13:12.665 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2614255 00:13:12.665 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:13:12.665 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:12.665 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2614255 00:13:12.665 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:12.665 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:12.665 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2614255' 00:13:12.665 killing process with pid 2614255 00:13:12.665 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2614255 00:13:12.665 Received shutdown signal, test time was about 10.000000 seconds 00:13:12.665 00:13:12.665 Latency(us) 00:13:12.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.665 
=================================================================================================================== 00:13:12.665 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:12.665 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2614255 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:12.926 rmmod nvme_tcp 00:13:12.926 rmmod nvme_fabrics 00:13:12.926 rmmod nvme_keyring 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2614105 ']' 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2614105 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2614105 ']' 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2614105 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2614105 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2614105' 00:13:12.926 killing process with pid 2614105 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2614105 00:13:12.926 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2614105 00:13:13.185 00:49:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:13.185 00:49:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:13.185 00:49:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:13.185 00:49:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:13.185 00:49:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:13.185 00:49:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.185 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.185 00:49:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.725 00:49:49 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:15.725 00:13:15.725 real 0m16.700s 00:13:15.725 user 0m23.639s 00:13:15.725 sys 0m2.946s 00:13:15.725 00:49:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:15.725 00:49:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:15.725 ************************************ 00:13:15.725 END TEST nvmf_queue_depth 00:13:15.725 ************************************ 00:13:15.725 00:49:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:15.725 00:49:49 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:15.725 00:49:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:15.725 00:49:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.725 00:49:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:15.725 ************************************ 00:13:15.725 START TEST nvmf_target_multipath 00:13:15.725 ************************************ 00:13:15.725 00:49:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:15.725 * Looking for test storage... 00:13:15.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:13:15.725 00:49:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:17.636 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:17.636 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:17.636 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:17.636 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:17.636 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.637 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:17.637 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:13:17.637 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:17.637 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:17.637 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:17.637 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.637 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.637 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.637 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:17.637 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.637 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.637 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:17.637 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.637 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.637 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:17.637 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:17.637 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.637 00:49:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:17.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:17.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:13:17.637 00:13:17.637 --- 10.0.0.2 ping statistics --- 00:13:17.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.637 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:17.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:13:17.637 00:13:17.637 --- 10.0.0.1 ping statistics --- 00:13:17.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.637 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:17.637 only one NIC for nvmf test 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:17.637 rmmod nvme_tcp 00:13:17.637 rmmod nvme_fabrics 00:13:17.637 rmmod nvme_keyring 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.637 00:49:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:19.540 00:13:19.540 real 0m4.241s 00:13:19.540 user 0m0.779s 00:13:19.540 sys 0m1.454s 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:19.540 00:49:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:19.540 ************************************ 00:13:19.540 END TEST nvmf_target_multipath 00:13:19.540 ************************************ 00:13:19.540 00:49:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:19.540 00:49:54 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:19.540 00:49:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:19.540 00:49:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.540 00:49:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:19.540 ************************************ 00:13:19.540 START TEST nvmf_zcopy 00:13:19.540 ************************************ 00:13:19.540 00:49:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:19.799 * Looking for test storage... 
00:13:19.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.799 00:49:54 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:13:19.800 00:49:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:21.701 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.701 
00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:21.701 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:21.701 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:21.701 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:21.702 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:21.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:21.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:13:21.702 00:13:21.702 --- 10.0.0.2 ping statistics --- 00:13:21.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.702 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:21.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:13:21.702 00:13:21.702 --- 10.0.0.1 ping statistics --- 00:13:21.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.702 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2619424 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2619424 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2619424 ']' 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:21.702 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:21.960 [2024-07-16 00:49:56.486328] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:13:21.960 [2024-07-16 00:49:56.486401] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.960 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.960 [2024-07-16 00:49:56.559648] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.960 [2024-07-16 00:49:56.679126] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.960 [2024-07-16 00:49:56.679193] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:21.960 [2024-07-16 00:49:56.679209] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.960 [2024-07-16 00:49:56.679222] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.960 [2024-07-16 00:49:56.679233] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:21.960 [2024-07-16 00:49:56.679263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:22.218 [2024-07-16 00:49:56.834703] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:22.218 [2024-07-16 00:49:56.850930] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:22.218 malloc0 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.218 
00:49:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:22.218 { 00:13:22.218 "params": { 00:13:22.218 "name": "Nvme$subsystem", 00:13:22.218 "trtype": "$TEST_TRANSPORT", 00:13:22.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:22.218 "adrfam": "ipv4", 00:13:22.218 "trsvcid": "$NVMF_PORT", 00:13:22.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:22.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:22.218 "hdgst": ${hdgst:-false}, 00:13:22.218 "ddgst": ${ddgst:-false} 00:13:22.218 }, 00:13:22.218 "method": "bdev_nvme_attach_controller" 00:13:22.218 } 00:13:22.218 EOF 00:13:22.218 )") 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:22.218 00:49:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:22.218 "params": { 00:13:22.218 "name": "Nvme1", 00:13:22.218 "trtype": "tcp", 00:13:22.218 "traddr": "10.0.0.2", 00:13:22.218 "adrfam": "ipv4", 00:13:22.218 "trsvcid": "4420", 00:13:22.218 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:22.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:22.218 "hdgst": false, 00:13:22.218 "ddgst": false 00:13:22.218 }, 00:13:22.218 "method": "bdev_nvme_attach_controller" 00:13:22.218 }' 00:13:22.218 [2024-07-16 00:49:56.936282] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:13:22.218 [2024-07-16 00:49:56.936370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2619449 ] 00:13:22.218 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.475 [2024-07-16 00:49:56.999650] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.475 [2024-07-16 00:49:57.124409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.733 Running I/O for 10 seconds... 
00:13:32.777 00:13:32.777 Latency(us) 00:13:32.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.777 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:32.777 Verification LBA range: start 0x0 length 0x1000 00:13:32.777 Nvme1n1 : 10.02 5976.29 46.69 0.00 0.00 21359.10 3155.44 34175.81 00:13:32.777 =================================================================================================================== 00:13:32.777 Total : 5976.29 46.69 0.00 0.00 21359.10 3155.44 34175.81 00:13:33.035 00:50:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2620783 00:13:33.035 00:50:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:33.035 00:50:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:33.035 00:50:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:33.035 00:50:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:33.035 00:50:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:33.035 00:50:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:33.035 00:50:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:33.035 00:50:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:33.035 { 00:13:33.035 "params": { 00:13:33.035 "name": "Nvme$subsystem", 00:13:33.035 "trtype": "$TEST_TRANSPORT", 00:13:33.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:33.035 "adrfam": "ipv4", 00:13:33.035 "trsvcid": "$NVMF_PORT", 00:13:33.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:33.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:33.035 "hdgst": ${hdgst:-false}, 00:13:33.035 "ddgst": ${ddgst:-false} 00:13:33.035 }, 00:13:33.035 "method": "bdev_nvme_attach_controller" 00:13:33.035 } 00:13:33.035 EOF 00:13:33.035 )") 00:13:33.035 00:50:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:33.035 [2024-07-16 00:50:07.734087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.036 [2024-07-16 00:50:07.734131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.036 00:50:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:13:33.036 00:50:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:33.036 00:50:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:33.036 "params": { 00:13:33.036 "name": "Nvme1", 00:13:33.036 "trtype": "tcp", 00:13:33.036 "traddr": "10.0.0.2", 00:13:33.036 "adrfam": "ipv4", 00:13:33.036 "trsvcid": "4420", 00:13:33.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:33.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:33.036 "hdgst": false, 00:13:33.036 "ddgst": false 00:13:33.036 }, 00:13:33.036 "method": "bdev_nvme_attach_controller" 00:13:33.036 }' 00:13:33.036 [2024-07-16 00:50:07.742057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.036 [2024-07-16 00:50:07.742082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.036 [2024-07-16 00:50:07.750078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.036 [2024-07-16 00:50:07.750101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.036 [2024-07-16 00:50:07.758095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.036 [2024-07-16 00:50:07.758117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.036 [2024-07-16 00:50:07.766115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.036 [2024-07-16 00:50:07.766138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.036 [2024-07-16 00:50:07.771131] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:13:33.036 [2024-07-16 00:50:07.771226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2620783 ] 00:13:33.036 [2024-07-16 00:50:07.774136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.036 [2024-07-16 00:50:07.774174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.036 [2024-07-16 00:50:07.782172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.036 [2024-07-16 00:50:07.782192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.036 [2024-07-16 00:50:07.790217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.036 [2024-07-16 00:50:07.790238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.294 [2024-07-16 00:50:07.798227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.294 [2024-07-16 00:50:07.798248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.294 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.294 [2024-07-16 00:50:07.806250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.294 [2024-07-16 00:50:07.806272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.294 [2024-07-16 00:50:07.814266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.294 [2024-07-16 00:50:07.814295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.294 [2024-07-16 00:50:07.822278] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.294 [2024-07-16 00:50:07.822299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.294 [2024-07-16 00:50:07.830299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.294 [2024-07-16 00:50:07.830320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.294 [2024-07-16 00:50:07.831570] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.294 [2024-07-16 00:50:07.838331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.294 [2024-07-16 00:50:07.838361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.294 [2024-07-16 00:50:07.846356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.294 [2024-07-16 00:50:07.846391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.294 [2024-07-16 00:50:07.854350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.294 [2024-07-16 00:50:07.854372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.294 [2024-07-16 00:50:07.862372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.294 [2024-07-16 00:50:07.862393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.294 [2024-07-16 00:50:07.870394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.294 [2024-07-16 00:50:07.870415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.294 [2024-07-16 00:50:07.878416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.294 [2024-07-16 00:50:07.878437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.294 [2024-07-16 00:50:07.886437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.294 [2024-07-16 00:50:07.886459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.294 [2024-07-16 00:50:07.894478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.294 [2024-07-16 00:50:07.894506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.294 [2024-07-16 00:50:07.902512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.294 [2024-07-16 00:50:07.902549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.294 [2024-07-16 00:50:07.910504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.294 [2024-07-16 00:50:07.910525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.294 [2024-07-16 00:50:07.918526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.294 [2024-07-16 00:50:07.918547] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.294 [2024-07-16 00:50:07.926547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.294 [2024-07-16 00:50:07.926569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.294 [2024-07-16 00:50:07.934569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:33.295 [2024-07-16 00:50:07.934590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.295 [2024-07-16 00:50:07.942607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.295 [2024-07-16 00:50:07.942634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.295 [2024-07-16 00:50:07.949078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.295 [2024-07-16 00:50:07.950629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.295 [2024-07-16 00:50:07.950654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.295 [2024-07-16 00:50:07.958649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.295 [2024-07-16 00:50:07.958681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.295 [2024-07-16 00:50:07.966691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.295 [2024-07-16 00:50:07.966727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.295 [2024-07-16 00:50:07.974718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.295 [2024-07-16 00:50:07.974756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.295 [2024-07-16 00:50:07.982740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.295 [2024-07-16 00:50:07.982779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.295 [2024-07-16 00:50:07.990763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.295 [2024-07-16 00:50:07.990802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.295 [2024-07-16 00:50:07.998788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.295 [2024-07-16 00:50:07.998831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.295 [2024-07-16 00:50:08.006801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.295 [2024-07-16 00:50:08.006842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.295 [2024-07-16 00:50:08.014827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.295 [2024-07-16 00:50:08.014867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.295 [2024-07-16 00:50:08.022829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.295 [2024-07-16 00:50:08.022855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.295 [2024-07-16 00:50:08.030869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.295 [2024-07-16 00:50:08.030929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.295 [2024-07-16 00:50:08.038897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.295 [2024-07-16 00:50:08.038949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.295 [2024-07-16 00:50:08.046924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:33.295 [2024-07-16 00:50:08.046960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.054939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.054961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.062958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.062981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.070987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.071029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.079001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.079025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.087042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.087081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.095045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.095069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.103078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.103100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.111099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.111130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.119124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.119146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.127148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.127189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.135193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.135219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.143220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.143248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.151240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.151267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.159262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.159287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.167270] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.167295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.175293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.175318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.183316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.183341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.191344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.191373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.199361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.199387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.207381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.207407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.215403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.215427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.223426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.223451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.231454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.231479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.239476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.239503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.247498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.247524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.255526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.255554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.263548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.263573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.271572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.271597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.279596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.279623] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.288942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.288970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 [2024-07-16 00:50:08.295648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.295676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.555 Running I/O for 5 seconds... 00:13:33.555 [2024-07-16 00:50:08.303664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.555 [2024-07-16 00:50:08.303690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.317863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.317923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.328379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.328412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.340601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.340633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.351349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.351381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.363184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.363217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.374825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.374857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.385947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.385977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.397793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.397825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.409387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.409419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.422601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.422632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.433173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.433204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.445027] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.445055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.456137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.456182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.467449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.467480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.478712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.478743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.489949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.489977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.501374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.501405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.512427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.512457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.523325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.523355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.536206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.814 [2024-07-16 00:50:08.536239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.814 [2024-07-16 00:50:08.546100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.815 [2024-07-16 00:50:08.546128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.815 [2024-07-16 00:50:08.557823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.815 [2024-07-16 00:50:08.557855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.815 [2024-07-16 00:50:08.569051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.815 [2024-07-16 00:50:08.569078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.074 [2024-07-16 00:50:08.580578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.074 [2024-07-16 00:50:08.580609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.074 [2024-07-16 00:50:08.591916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.074 [2024-07-16 00:50:08.591961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.074 [2024-07-16 00:50:08.603538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.074 [2024-07-16 00:50:08.603571] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.074 [2024-07-16 00:50:08.614886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.074 [2024-07-16 00:50:08.614937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.074 [2024-07-16 00:50:08.626344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.074 [2024-07-16 00:50:08.626375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.074 [2024-07-16 00:50:08.637594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.074 [2024-07-16 00:50:08.637625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.074 [2024-07-16 00:50:08.648956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.074 [2024-07-16 00:50:08.648984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.074 [2024-07-16 00:50:08.660453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.074 [2024-07-16 00:50:08.660484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.074 [2024-07-16 00:50:08.671939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.074 [2024-07-16 00:50:08.671980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.074 [2024-07-16 00:50:08.683529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.074 [2024-07-16 00:50:08.683559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.074 [2024-07-16 00:50:08.695225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.074 [2024-07-16 00:50:08.695256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.074 [2024-07-16 00:50:08.707249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.074 [2024-07-16 00:50:08.707280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.074 [2024-07-16 00:50:08.718758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.074 [2024-07-16 00:50:08.718788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.074 [2024-07-16 00:50:08.730017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.074 [2024-07-16 00:50:08.730046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.074 [2024-07-16 00:50:08.741517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.074 [2024-07-16 00:50:08.741548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.074 [2024-07-16 00:50:08.752386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.074 [2024-07-16 00:50:08.752417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.074 [2024-07-16 00:50:08.763826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.074 [2024-07-16 00:50:08.763857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.074 [2024-07-16 00:50:08.774971] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.200 [2024-07-16 00:50:11.807070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.200 [2024-07-16 00:50:11.807098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.200 [2024-07-16 00:50:11.817396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.200 [2024-07-16 00:50:11.817426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.200 [2024-07-16 00:50:11.828592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.200 [2024-07-16 00:50:11.828623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.200 [2024-07-16 00:50:11.839712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.200 [2024-07-16 00:50:11.839743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.200 [2024-07-16 00:50:11.850963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.201 [2024-07-16 00:50:11.850993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.201 [2024-07-16 00:50:11.862110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.201 [2024-07-16 00:50:11.862138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.201 [2024-07-16 00:50:11.873425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.201 [2024-07-16 00:50:11.873457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.201 [2024-07-16 00:50:11.884840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.201 [2024-07-16 00:50:11.884872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.201 [2024-07-16 00:50:11.896590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.201 [2024-07-16 00:50:11.896630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.201 [2024-07-16 00:50:11.908331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.201 [2024-07-16 00:50:11.908362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.201 [2024-07-16 00:50:11.919408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.201 [2024-07-16 00:50:11.919450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.201 [2024-07-16 00:50:11.930979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.201 [2024-07-16 00:50:11.931008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.201 [2024-07-16 00:50:11.942175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.201 [2024-07-16 00:50:11.942207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.201 [2024-07-16 00:50:11.953396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.201 [2024-07-16 00:50:11.953436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.460 [2024-07-16 00:50:11.964696] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.460 [2024-07-16 00:50:11.964726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.460 [2024-07-16 00:50:11.975959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.460 [2024-07-16 00:50:11.975988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.460 [2024-07-16 00:50:11.988687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.460 [2024-07-16 00:50:11.988718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.460 [2024-07-16 00:50:11.997731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.460 [2024-07-16 00:50:11.997759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.460 [2024-07-16 00:50:12.010513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.460 [2024-07-16 00:50:12.010541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.460 [2024-07-16 00:50:12.020387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.460 [2024-07-16 00:50:12.020414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.460 [2024-07-16 00:50:12.031544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.460 [2024-07-16 00:50:12.031571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.460 [2024-07-16 00:50:12.041981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.460 [2024-07-16 00:50:12.042009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.460 [2024-07-16 00:50:12.052331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.460 [2024-07-16 00:50:12.052359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.460 [2024-07-16 00:50:12.062480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.460 [2024-07-16 00:50:12.062508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.460 [2024-07-16 00:50:12.072640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.460 [2024-07-16 00:50:12.072668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.460 [2024-07-16 00:50:12.082833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.460 [2024-07-16 00:50:12.082862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.460 [2024-07-16 00:50:12.093099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.460 [2024-07-16 00:50:12.093128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.460 [2024-07-16 00:50:12.103829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.460 [2024-07-16 00:50:12.103857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.460 [2024-07-16 00:50:12.116253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.461 [2024-07-16 00:50:12.116282] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.461 [2024-07-16 00:50:12.126144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.461 [2024-07-16 00:50:12.126172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.461 [2024-07-16 00:50:12.136312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.461 [2024-07-16 00:50:12.136340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.461 [2024-07-16 00:50:12.146732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.461 [2024-07-16 00:50:12.146761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.461 [2024-07-16 00:50:12.158827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.461 [2024-07-16 00:50:12.158855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.461 [2024-07-16 00:50:12.168342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.461 [2024-07-16 00:50:12.168370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.461 [2024-07-16 00:50:12.179168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.461 [2024-07-16 00:50:12.179195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.461 [2024-07-16 00:50:12.189260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.461 [2024-07-16 00:50:12.189287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.461 [2024-07-16 00:50:12.199433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.461 [2024-07-16 00:50:12.199471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.461 [2024-07-16 00:50:12.210102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.461 [2024-07-16 00:50:12.210131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.220564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.220593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.232804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.232832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.242128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.242157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.253035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.253063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.264956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.264983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.274470] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.274497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.285124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.285151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.297353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.297381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.306790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.306818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.317661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.317700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.328196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.328225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.338580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.338608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.348919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.348948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.359498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.359526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.369803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.369830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.380260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.380287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.390623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.390651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.402801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.402840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.411704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.411732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.422663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.422690] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.433131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.433158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.443573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.443601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.455743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.455772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.465016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.465043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.722 [2024-07-16 00:50:12.476073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.722 [2024-07-16 00:50:12.476101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.493503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.493534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.504169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.504199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.515107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.515135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.525953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.525981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.537080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.537109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.548118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.548146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.559320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.559351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.572021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.572049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.582309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.582338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.593850] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.593888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.605105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.605133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.615833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.615874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.627128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.627173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.638477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.638507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.649685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.649717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.661321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.661352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.672791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.672822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.683652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.683683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.694520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.694551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.706110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.706139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.716961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.716990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.727721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.727751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.984 [2024-07-16 00:50:12.738846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.984 [2024-07-16 00:50:12.738884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.749964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.749993] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.761171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.761201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.772226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.772256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.782735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.782765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.793549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.793578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.804467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.804496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.815490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.815519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.826252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.826291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.839313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.839343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.848970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.848998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.860057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.860086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.870862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.870915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.881552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.881582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.892215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.892244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.902972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.903001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.914044] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.914074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.925042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.925072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.936948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.936976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.948478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.948508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.961371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.961402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.971436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.971467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.983339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.983369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.245 [2024-07-16 00:50:12.994348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.245 [2024-07-16 00:50:12.994379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.505 [2024-07-16 00:50:13.005401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.505 [2024-07-16 00:50:13.005433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.505 [2024-07-16 00:50:13.016533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.016563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.027604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.027638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.038669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.038699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.051591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.051623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.061571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.061603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.073441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.073472] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.084858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.084899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.095868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.095936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.106699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.106729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.118838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.118870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.130136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.130181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.143090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.143119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.152642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.152672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.164272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.164304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.175534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.175566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.186483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.186514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.197699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.197730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.208321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.208353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.220294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.220325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.231622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.231652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.242690] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.242720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.506 [2024-07-16 00:50:13.253895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.506 [2024-07-16 00:50:13.253938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.266748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.266779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.276773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.276803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.288455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.288485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.299980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.300009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.311023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.311051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.320549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.320580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 00:13:38.764 Latency(us) 00:13:38.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.764 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:38.764 Nvme1n1 : 5.01 11656.84 91.07 0.00 0.00 10966.51 4563.25 23981.32 00:13:38.764 =================================================================================================================== 00:13:38.764 Total : 11656.84 91.07 0.00 0.00 10966.51 4563.25 23981.32 00:13:38.764 [2024-07-16 00:50:13.325736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.325766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.333756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.333785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.341775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.341802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.349838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.349894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.357868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.357943] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.365897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.365953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.373935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.373980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.381950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.382001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.389978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.390024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.398002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.398044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.406029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.406078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.414035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.414083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.422046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.422095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.430063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.430110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.438084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.438129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.446103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.446149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.454122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.454164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.462119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.462155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.470123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.470144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.764 [2024-07-16 00:50:13.478146] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.764 [2024-07-16 00:50:13.478189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.765 [2024-07-16 00:50:13.486185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.765 [2024-07-16 00:50:13.486211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.765 [2024-07-16 00:50:13.494203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.765 [2024-07-16 00:50:13.494239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.765 [2024-07-16 00:50:13.502268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.765 [2024-07-16 00:50:13.502315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.765 [2024-07-16 00:50:13.510292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.765 [2024-07-16 00:50:13.510337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.765 [2024-07-16 00:50:13.518311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.765 [2024-07-16 00:50:13.518351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.022 [2024-07-16 00:50:13.526293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.022 [2024-07-16 00:50:13.526320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.022 [2024-07-16 00:50:13.534315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.022 [2024-07-16 00:50:13.534340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.022 [2024-07-16 00:50:13.542336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.022 [2024-07-16 00:50:13.542375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.022 [2024-07-16 00:50:13.550345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.022 [2024-07-16 00:50:13.550367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.022 [2024-07-16 00:50:13.558434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.022 [2024-07-16 00:50:13.558482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.022 [2024-07-16 00:50:13.566438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.022 [2024-07-16 00:50:13.566482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.022 [2024-07-16 00:50:13.574426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.022 [2024-07-16 00:50:13.574451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.022 [2024-07-16 00:50:13.582445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.022 [2024-07-16 00:50:13.582470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.022 [2024-07-16 00:50:13.590467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.022 [2024-07-16 00:50:13.590492] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2620783) - No such process 00:13:39.022 00:50:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2620783 00:13:39.022 00:50:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.022 00:50:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.022 00:50:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:39.022 00:50:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.022 00:50:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:39.022 00:50:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.022 00:50:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:39.022 delay0 00:13:39.022 00:50:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.022 00:50:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:39.022 00:50:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.022 00:50:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:39.022 00:50:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.022 00:50:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:39.022 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.022 [2024-07-16 00:50:13.670807] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:45.591 Initializing NVMe Controllers 00:13:45.591 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:45.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:45.591 Initialization complete. Launching workers. 
00:13:45.591 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 79 00:13:45.591 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 366, failed to submit 33 00:13:45.591 success 152, unsuccess 214, failed 0 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:45.591 rmmod nvme_tcp 00:13:45.591 rmmod nvme_fabrics 00:13:45.591 rmmod nvme_keyring 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2619424 ']' 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2619424 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2619424 ']' 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2619424 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2619424 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2619424' 00:13:45.591 killing process with pid 2619424 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2619424 00:13:45.591 00:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2619424 00:13:45.591 00:50:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:45.591 00:50:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:45.591 00:50:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:45.591 00:50:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:45.591 00:50:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:45.591 00:50:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.591 00:50:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.591 00:50:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.498 00:50:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:47.498 00:13:47.498 real 0m27.935s 00:13:47.498 user 0m41.353s 00:13:47.498 sys 0m8.340s 00:13:47.498 00:50:22 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:13:47.498 00:50:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:47.498 ************************************ 00:13:47.498 END TEST nvmf_zcopy 00:13:47.498 ************************************ 00:13:47.498 00:50:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:47.498 00:50:22 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:47.498 00:50:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:47.498 00:50:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:47.498 00:50:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:47.757 ************************************ 00:13:47.757 START TEST nvmf_nmic 00:13:47.757 ************************************ 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:47.757 * Looking for test storage... 00:13:47.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.757 00:50:22 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:47.758 00:50:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:49.743 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.743 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:49.744 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:49.744 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:49.744 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:49.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:49.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:13:49.744 00:13:49.744 --- 10.0.0.2 ping statistics --- 00:13:49.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.744 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:49.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:49.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:13:49.744 00:13:49.744 --- 10.0.0.1 ping statistics --- 00:13:49.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.744 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2624090 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2624090 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2624090 ']' 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:49.744 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.745 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:49.745 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:49.745 [2024-07-16 00:50:24.423209] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:13:49.745 [2024-07-16 00:50:24.423283] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.745 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.004 [2024-07-16 00:50:24.503431] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:50.004 [2024-07-16 00:50:24.643097] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.004 [2024-07-16 00:50:24.643164] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:50.004 [2024-07-16 00:50:24.643206] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.005 [2024-07-16 00:50:24.643230] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.005 [2024-07-16 00:50:24.643263] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.005 [2024-07-16 00:50:24.643357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.005 [2024-07-16 00:50:24.643553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.005 [2024-07-16 00:50:24.643618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:50.005 [2024-07-16 00:50:24.643627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:50.264 [2024-07-16 00:50:24.821079] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:50.264 Malloc0 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:50.264 [2024-07-16 00:50:24.872378] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:50.264 test case1: single bdev can't be used in multiple subsystems 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:50.264 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.265 00:50:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:50.265 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.265 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:50.265 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.265 00:50:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:50.265 00:50:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:50.265 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.265 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:50.265 [2024-07-16 00:50:24.896249] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:50.265 [2024-07-16 00:50:24.896277] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:50.265 [2024-07-16 00:50:24.896307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.265 request: 00:13:50.265 { 00:13:50.265 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:50.265 "namespace": { 00:13:50.265 "bdev_name": "Malloc0", 00:13:50.265 "no_auto_visible": false 00:13:50.265 }, 00:13:50.265 "method": "nvmf_subsystem_add_ns", 00:13:50.265 "req_id": 1 00:13:50.265 } 00:13:50.265 Got JSON-RPC error response 00:13:50.265 response: 00:13:50.265 { 00:13:50.265 "code": -32602, 00:13:50.265 "message": "Invalid parameters" 00:13:50.265 } 00:13:50.265 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:50.265 00:50:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:50.265 00:50:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:50.265 00:50:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:50.265 Adding namespace failed - expected result. 
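For reference, the RPC sequence exercised by test case 1 above reduces to roughly the following calls (a minimal sketch reconstructed from the trace; the rpc_cmd wrapper, full workspace paths, and nmic.sh bookkeeping are omitted, and scripts/rpc.py stands in for the absolute path used in this run):

  # create the TCP transport and a 64 MiB, 512-byte-block malloc bdev
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # expose Malloc0 through the first subsystem and start a TCP listener
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # a second subsystem with its own listener is fine...
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  # ...but attaching the same bdev as a namespace of a second subsystem is rejected,
  # because Malloc0 is already claimed (type exclusive_write) by cnode1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected failure, JSON-RPC error -32602

The "bdev Malloc0 already claimed: type exclusive_write" error and the -32602 response in the JSON-RPC output above are exactly what target/nmic.sh checks for via nmic_status before printing "Adding namespace failed - expected result."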
00:13:50.265 00:50:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:50.265 test case2: host connect to nvmf target in multiple paths 00:13:50.265 00:50:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:50.265 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.265 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:50.265 [2024-07-16 00:50:24.904359] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:50.265 00:50:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.265 00:50:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:50.833 00:50:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:51.403 00:50:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:51.403 00:50:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:51.403 00:50:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:51.403 00:50:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:51.403 00:50:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:53.940 00:50:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:53.940 00:50:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:53.940 00:50:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:53.940 00:50:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:53.940 00:50:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:53.940 00:50:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:53.940 00:50:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:53.940 [global] 00:13:53.940 thread=1 00:13:53.940 invalidate=1 00:13:53.940 rw=write 00:13:53.940 time_based=1 00:13:53.940 runtime=1 00:13:53.940 ioengine=libaio 00:13:53.940 direct=1 00:13:53.940 bs=4096 00:13:53.940 iodepth=1 00:13:53.940 norandommap=0 00:13:53.940 numjobs=1 00:13:53.940 00:13:53.940 verify_dump=1 00:13:53.940 verify_backlog=512 00:13:53.940 verify_state_save=0 00:13:53.940 do_verify=1 00:13:53.940 verify=crc32c-intel 00:13:53.940 [job0] 00:13:53.940 filename=/dev/nvme0n1 00:13:53.940 Could not set queue depth (nvme0n1) 00:13:53.940 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:53.940 fio-3.35 00:13:53.940 Starting 1 thread 00:13:54.876 00:13:54.876 job0: (groupid=0, jobs=1): err= 0: pid=2624670: Tue Jul 16 00:50:29 2024 00:13:54.876 read: IOPS=512, BW=2049KiB/s (2098kB/s)(2100KiB/1025msec) 00:13:54.876 slat (nsec): min=5390, max=34552, avg=7096.21, stdev=4527.67 
00:13:54.876 clat (usec): min=374, max=41014, avg=1458.83, stdev=6297.35 00:13:54.876 lat (usec): min=380, max=41037, avg=1465.93, stdev=6300.92 00:13:54.876 clat percentiles (usec): 00:13:54.876 | 1.00th=[ 383], 5.00th=[ 396], 10.00th=[ 400], 20.00th=[ 416], 00:13:54.876 | 30.00th=[ 433], 40.00th=[ 437], 50.00th=[ 445], 60.00th=[ 453], 00:13:54.876 | 70.00th=[ 506], 80.00th=[ 515], 90.00th=[ 529], 95.00th=[ 537], 00:13:54.876 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:54.876 | 99.99th=[41157] 00:13:54.876 write: IOPS=999, BW=3996KiB/s (4092kB/s)(4096KiB/1025msec); 0 zone resets 00:13:54.876 slat (nsec): min=6691, max=38370, avg=12292.53, stdev=6212.13 00:13:54.876 clat (usec): min=190, max=432, avg=232.64, stdev=32.92 00:13:54.876 lat (usec): min=197, max=464, avg=244.93, stdev=37.63 00:13:54.876 clat percentiles (usec): 00:13:54.876 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 210], 00:13:54.876 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 229], 00:13:54.876 | 70.00th=[ 239], 80.00th=[ 255], 90.00th=[ 281], 95.00th=[ 293], 00:13:54.876 | 99.00th=[ 359], 99.50th=[ 396], 99.90th=[ 420], 99.95th=[ 433], 00:13:54.876 | 99.99th=[ 433] 00:13:54.876 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:13:54.876 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:54.876 lat (usec) : 250=51.71%, 500=37.70%, 750=9.75% 00:13:54.876 lat (msec) : 50=0.84% 00:13:54.876 cpu : usr=0.98%, sys=2.34%, ctx=1549, majf=0, minf=2 00:13:54.876 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:54.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.876 issued rwts: total=525,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.876 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:54.876 00:13:54.876 Run status group 0 (all jobs): 00:13:54.876 READ: bw=2049KiB/s (2098kB/s), 2049KiB/s-2049KiB/s (2098kB/s-2098kB/s), io=2100KiB (2150kB), run=1025-1025msec 00:13:54.876 WRITE: bw=3996KiB/s (4092kB/s), 3996KiB/s-3996KiB/s (4092kB/s-4092kB/s), io=4096KiB (4194kB), run=1025-1025msec 00:13:54.876 00:13:54.876 Disk stats (read/write): 00:13:54.876 nvme0n1: ios=571/1024, merge=0/0, ticks=634/233, in_queue=867, util=92.38% 00:13:54.876 00:50:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:54.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:54.876 00:50:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:54.876 00:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:54.876 00:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:54.876 00:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:54.876 00:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:54.876 00:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.133 00:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:55.133 00:50:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:55.133 00:50:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:55.133 00:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:13:55.133 00:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:55.133 00:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:55.133 00:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:55.133 00:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:55.134 00:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:55.134 rmmod nvme_tcp 00:13:55.134 rmmod nvme_fabrics 00:13:55.134 rmmod nvme_keyring 00:13:55.134 00:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:55.134 00:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:55.134 00:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:55.134 00:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2624090 ']' 00:13:55.134 00:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2624090 00:13:55.134 00:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2624090 ']' 00:13:55.134 00:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2624090 00:13:55.134 00:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:55.134 00:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:55.134 00:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2624090 00:13:55.134 00:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:55.134 00:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:55.134 00:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2624090' 00:13:55.134 killing process with pid 2624090 00:13:55.134 00:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2624090 00:13:55.134 00:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2624090 00:13:55.391 00:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:55.391 00:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:55.391 00:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:55.391 00:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:55.391 00:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:55.391 00:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.391 00:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.391 00:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.300 00:50:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:57.300 00:13:57.300 real 0m9.791s 00:13:57.300 user 0m22.167s 00:13:57.300 sys 0m2.266s 00:13:57.300 00:50:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:57.300 00:50:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:57.300 ************************************ 00:13:57.300 END TEST nvmf_nmic 00:13:57.300 ************************************ 00:13:57.558 00:50:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:57.558 00:50:32 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:57.558 00:50:32 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:57.558 00:50:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:57.558 00:50:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:57.558 ************************************ 00:13:57.558 START TEST nvmf_fio_target 00:13:57.558 ************************************ 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:57.558 * Looking for test storage... 00:13:57.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:57.558 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:57.559 00:50:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.089 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:00.089 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:00.089 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:00.089 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:00.089 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:00.089 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:00.089 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:00.089 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:00.089 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:00.089 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:14:00.089 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:00.089 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:14:00.089 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:00.089 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:14:00.089 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:00.089 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.090 00:50:34 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:00.090 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:00.090 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.090 00:50:34 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:00.090 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:00.090 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:00.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:14:00.090 00:14:00.090 --- 10.0.0.2 ping statistics --- 00:14:00.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.090 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:00.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:00.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:14:00.090 00:14:00.090 --- 10.0.0.1 ping statistics --- 00:14:00.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.090 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2626753 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2626753 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2626753 ']' 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.090 [2024-07-16 00:50:34.479460] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:14:00.090 [2024-07-16 00:50:34.479529] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.090 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.090 [2024-07-16 00:50:34.542224] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:00.090 [2024-07-16 00:50:34.654779] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.090 [2024-07-16 00:50:34.654828] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.090 [2024-07-16 00:50:34.654851] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.090 [2024-07-16 00:50:34.654862] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.090 [2024-07-16 00:50:34.654872] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.090 [2024-07-16 00:50:34.654949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.090 [2024-07-16 00:50:34.655009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.090 [2024-07-16 00:50:34.655074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:00.090 [2024-07-16 00:50:34.655077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:00.090 00:50:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:00.091 00:50:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 00:50:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.091 00:50:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:00.349 [2024-07-16 00:50:35.100719] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.609 00:50:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:00.869 00:50:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:00.869 00:50:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:01.127 00:50:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:01.127 00:50:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:01.385 00:50:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:14:01.385 00:50:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:01.643 00:50:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:01.643 00:50:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:01.900 00:50:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:02.158 00:50:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:02.158 00:50:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:02.416 00:50:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:02.416 00:50:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:02.674 00:50:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:02.674 00:50:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:02.931 00:50:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:03.189 00:50:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:03.189 00:50:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:03.446 00:50:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:03.446 00:50:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:03.704 00:50:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.961 [2024-07-16 00:50:38.538403] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.961 00:50:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:04.219 00:50:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:04.478 00:50:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:05.057 00:50:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:05.057 00:50:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:14:05.057 00:50:39 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:05.057 00:50:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:14:05.057 00:50:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:14:05.057 00:50:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:14:07.002 00:50:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:07.002 00:50:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:07.002 00:50:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:07.002 00:50:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:14:07.002 00:50:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:07.002 00:50:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:14:07.003 00:50:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:07.003 [global] 00:14:07.003 thread=1 00:14:07.003 invalidate=1 00:14:07.003 rw=write 00:14:07.003 time_based=1 00:14:07.003 runtime=1 00:14:07.003 ioengine=libaio 00:14:07.003 direct=1 00:14:07.003 bs=4096 00:14:07.003 iodepth=1 00:14:07.003 norandommap=0 00:14:07.003 numjobs=1 00:14:07.003 00:14:07.003 verify_dump=1 00:14:07.003 verify_backlog=512 00:14:07.003 verify_state_save=0 00:14:07.003 do_verify=1 00:14:07.003 verify=crc32c-intel 00:14:07.003 [job0] 00:14:07.003 filename=/dev/nvme0n1 00:14:07.003 [job1] 00:14:07.003 filename=/dev/nvme0n2 00:14:07.003 [job2] 00:14:07.003 filename=/dev/nvme0n3 00:14:07.003 [job3] 00:14:07.003 filename=/dev/nvme0n4 00:14:07.003 Could not set queue depth (nvme0n1) 00:14:07.003 Could not set queue depth (nvme0n2) 00:14:07.003 Could not set queue depth (nvme0n3) 00:14:07.003 Could not set queue depth (nvme0n4) 00:14:07.262 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:07.262 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:07.262 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:07.262 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:07.262 fio-3.35 00:14:07.262 Starting 4 threads 00:14:08.643 00:14:08.643 job0: (groupid=0, jobs=1): err= 0: pid=2627820: Tue Jul 16 00:50:43 2024 00:14:08.643 read: IOPS=18, BW=75.5KiB/s (77.3kB/s)(76.0KiB/1007msec) 00:14:08.643 slat (nsec): min=17529, max=39980, avg=24231.16, stdev=8487.71 00:14:08.643 clat (usec): min=40921, max=42558, avg=41688.14, stdev=470.67 00:14:08.643 lat (usec): min=40961, max=42576, avg=41712.37, stdev=471.25 00:14:08.643 clat percentiles (usec): 00:14:08.643 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:14:08.643 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:14:08.643 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:14:08.643 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:14:08.643 | 99.99th=[42730] 00:14:08.643 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:14:08.643 slat (nsec): min=6803, max=68762, avg=23034.93, stdev=12548.90 
00:14:08.643 clat (usec): min=198, max=817, avg=388.92, stdev=148.51 00:14:08.643 lat (usec): min=205, max=843, avg=411.96, stdev=153.82 00:14:08.643 clat percentiles (usec): 00:14:08.643 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 221], 20.00th=[ 235], 00:14:08.643 | 30.00th=[ 255], 40.00th=[ 306], 50.00th=[ 383], 60.00th=[ 429], 00:14:08.643 | 70.00th=[ 469], 80.00th=[ 523], 90.00th=[ 594], 95.00th=[ 652], 00:14:08.643 | 99.00th=[ 766], 99.50th=[ 775], 99.90th=[ 816], 99.95th=[ 816], 00:14:08.643 | 99.99th=[ 816] 00:14:08.643 bw ( KiB/s): min= 4096, max= 4096, per=34.60%, avg=4096.00, stdev= 0.00, samples=1 00:14:08.643 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:08.643 lat (usec) : 250=27.31%, 500=45.95%, 750=21.85%, 1000=1.32% 00:14:08.643 lat (msec) : 50=3.58% 00:14:08.643 cpu : usr=0.80%, sys=1.39%, ctx=535, majf=0, minf=1 00:14:08.643 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:08.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.643 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.643 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:08.643 job1: (groupid=0, jobs=1): err= 0: pid=2627821: Tue Jul 16 00:50:43 2024 00:14:08.643 read: IOPS=22, BW=88.6KiB/s (90.8kB/s)(92.0KiB/1038msec) 00:14:08.643 slat (nsec): min=9967, max=33485, avg=23970.09, stdev=8252.98 00:14:08.643 clat (usec): min=521, max=41325, avg=39234.29, stdev=8439.55 00:14:08.643 lat (usec): min=541, max=41343, avg=39258.26, stdev=8440.37 00:14:08.643 clat percentiles (usec): 00:14:08.643 | 1.00th=[ 523], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:14:08.643 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:08.643 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:08.643 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:08.643 | 99.99th=[41157] 00:14:08.643 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:14:08.643 slat (nsec): min=6090, max=43770, avg=11767.25, stdev=6020.15 00:14:08.643 clat (usec): min=202, max=4092, avg=247.94, stdev=172.95 00:14:08.643 lat (usec): min=212, max=4115, avg=259.71, stdev=173.86 00:14:08.643 clat percentiles (usec): 00:14:08.643 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:14:08.643 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 237], 00:14:08.643 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 281], 95.00th=[ 310], 00:14:08.643 | 99.00th=[ 371], 99.50th=[ 375], 99.90th=[ 4080], 99.95th=[ 4080], 00:14:08.643 | 99.99th=[ 4080] 00:14:08.643 bw ( KiB/s): min= 4096, max= 4096, per=34.60%, avg=4096.00, stdev= 0.00, samples=1 00:14:08.643 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:08.643 lat (usec) : 250=71.21%, 500=24.30%, 750=0.19% 00:14:08.643 lat (msec) : 10=0.19%, 50=4.11% 00:14:08.643 cpu : usr=0.19%, sys=0.68%, ctx=535, majf=0, minf=1 00:14:08.643 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:08.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.643 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.643 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:08.643 job2: (groupid=0, jobs=1): err= 0: pid=2627823: Tue Jul 16 
00:50:43 2024 00:14:08.643 read: IOPS=1098, BW=4396KiB/s (4501kB/s)(4400KiB/1001msec) 00:14:08.643 slat (nsec): min=7934, max=63691, avg=14223.21, stdev=5118.90 00:14:08.643 clat (usec): min=321, max=725, avg=365.49, stdev=30.72 00:14:08.643 lat (usec): min=331, max=742, avg=379.71, stdev=32.49 00:14:08.643 clat percentiles (usec): 00:14:08.643 | 1.00th=[ 326], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 347], 00:14:08.643 | 30.00th=[ 355], 40.00th=[ 359], 50.00th=[ 363], 60.00th=[ 367], 00:14:08.643 | 70.00th=[ 371], 80.00th=[ 379], 90.00th=[ 388], 95.00th=[ 404], 00:14:08.643 | 99.00th=[ 506], 99.50th=[ 562], 99.90th=[ 619], 99.95th=[ 725], 00:14:08.643 | 99.99th=[ 725] 00:14:08.643 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:08.643 slat (nsec): min=10127, max=74125, avg=24833.19, stdev=10604.20 00:14:08.643 clat (usec): min=208, max=1250, avg=344.89, stdev=141.38 00:14:08.643 lat (usec): min=219, max=1290, avg=369.73, stdev=144.62 00:14:08.643 clat percentiles (usec): 00:14:08.643 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 251], 00:14:08.643 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 306], 00:14:08.643 | 70.00th=[ 343], 80.00th=[ 420], 90.00th=[ 523], 95.00th=[ 635], 00:14:08.643 | 99.00th=[ 963], 99.50th=[ 1004], 99.90th=[ 1254], 99.95th=[ 1254], 00:14:08.643 | 99.99th=[ 1254] 00:14:08.643 bw ( KiB/s): min= 5608, max= 5608, per=47.37%, avg=5608.00, stdev= 0.00, samples=1 00:14:08.643 iops : min= 1402, max= 1402, avg=1402.00, stdev= 0.00, samples=1 00:14:08.643 lat (usec) : 250=11.34%, 500=81.22%, 750=6.07%, 1000=1.06% 00:14:08.643 lat (msec) : 2=0.30% 00:14:08.643 cpu : usr=3.70%, sys=7.30%, ctx=2638, majf=0, minf=1 00:14:08.643 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:08.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.643 issued rwts: total=1100,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.643 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:08.643 job3: (groupid=0, jobs=1): err= 0: pid=2627829: Tue Jul 16 00:50:43 2024 00:14:08.643 read: IOPS=19, BW=78.7KiB/s (80.6kB/s)(80.0KiB/1017msec) 00:14:08.643 slat (nsec): min=17716, max=33820, avg=22252.60, stdev=6845.41 00:14:08.643 clat (usec): min=40762, max=42044, avg=41011.21, stdev=249.23 00:14:08.643 lat (usec): min=40786, max=42062, avg=41033.46, stdev=247.78 00:14:08.643 clat percentiles (usec): 00:14:08.643 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:14:08.643 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:08.643 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:08.643 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:08.643 | 99.99th=[42206] 00:14:08.643 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:14:08.643 slat (nsec): min=7122, max=73377, avg=20646.20, stdev=10454.03 00:14:08.643 clat (usec): min=207, max=1017, avg=357.10, stdev=153.37 00:14:08.643 lat (usec): min=217, max=1032, avg=377.75, stdev=153.02 00:14:08.643 clat percentiles (usec): 00:14:08.643 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 249], 00:14:08.643 | 30.00th=[ 262], 40.00th=[ 277], 50.00th=[ 322], 60.00th=[ 355], 00:14:08.643 | 70.00th=[ 388], 80.00th=[ 420], 90.00th=[ 469], 95.00th=[ 758], 00:14:08.643 | 99.00th=[ 938], 99.50th=[ 996], 99.90th=[ 1020], 99.95th=[ 
1020], 00:14:08.643 | 99.99th=[ 1020] 00:14:08.643 bw ( KiB/s): min= 4096, max= 4096, per=34.60%, avg=4096.00, stdev= 0.00, samples=1 00:14:08.643 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:08.643 lat (usec) : 250=19.74%, 500=68.98%, 750=2.26%, 1000=4.89% 00:14:08.643 lat (msec) : 2=0.38%, 50=3.76% 00:14:08.643 cpu : usr=0.59%, sys=0.98%, ctx=532, majf=0, minf=2 00:14:08.643 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:08.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.643 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.643 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:08.643 00:14:08.644 Run status group 0 (all jobs): 00:14:08.644 READ: bw=4478KiB/s (4585kB/s), 75.5KiB/s-4396KiB/s (77.3kB/s-4501kB/s), io=4648KiB (4760kB), run=1001-1038msec 00:14:08.644 WRITE: bw=11.6MiB/s (12.1MB/s), 1973KiB/s-6138KiB/s (2020kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1038msec 00:14:08.644 00:14:08.644 Disk stats (read/write): 00:14:08.644 nvme0n1: ios=42/512, merge=0/0, ticks=1616/192, in_queue=1808, util=97.90% 00:14:08.644 nvme0n2: ios=66/512, merge=0/0, ticks=861/123, in_queue=984, util=99.80% 00:14:08.644 nvme0n3: ios=1049/1160, merge=0/0, ticks=1344/405, in_queue=1749, util=97.70% 00:14:08.644 nvme0n4: ios=40/512, merge=0/0, ticks=812/173, in_queue=985, util=91.02% 00:14:08.644 00:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:08.644 [global] 00:14:08.644 thread=1 00:14:08.644 invalidate=1 00:14:08.644 rw=randwrite 00:14:08.644 time_based=1 00:14:08.644 runtime=1 00:14:08.644 ioengine=libaio 00:14:08.644 direct=1 00:14:08.644 bs=4096 00:14:08.644 iodepth=1 00:14:08.644 norandommap=0 00:14:08.644 numjobs=1 00:14:08.644 00:14:08.644 verify_dump=1 00:14:08.644 verify_backlog=512 00:14:08.644 verify_state_save=0 00:14:08.644 do_verify=1 00:14:08.644 verify=crc32c-intel 00:14:08.644 [job0] 00:14:08.644 filename=/dev/nvme0n1 00:14:08.644 [job1] 00:14:08.644 filename=/dev/nvme0n2 00:14:08.644 [job2] 00:14:08.644 filename=/dev/nvme0n3 00:14:08.644 [job3] 00:14:08.644 filename=/dev/nvme0n4 00:14:08.644 Could not set queue depth (nvme0n1) 00:14:08.644 Could not set queue depth (nvme0n2) 00:14:08.644 Could not set queue depth (nvme0n3) 00:14:08.644 Could not set queue depth (nvme0n4) 00:14:08.900 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:08.900 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:08.900 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:08.900 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:08.900 fio-3.35 00:14:08.900 Starting 4 threads 00:14:10.281 00:14:10.281 job0: (groupid=0, jobs=1): err= 0: pid=2628058: Tue Jul 16 00:50:44 2024 00:14:10.281 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:14:10.281 slat (nsec): min=6983, max=32112, avg=26132.00, stdev=8486.51 00:14:10.281 clat (usec): min=40869, max=41105, avg=40962.86, stdev=54.51 00:14:10.281 lat (usec): min=40876, max=41120, avg=40988.99, stdev=53.45 00:14:10.281 clat percentiles (usec): 00:14:10.281 | 1.00th=[40633], 
5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:14:10.281 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:10.281 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:10.281 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:10.281 | 99.99th=[41157] 00:14:10.281 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:14:10.281 slat (nsec): min=5914, max=31158, avg=7008.30, stdev=1752.79 00:14:10.281 clat (usec): min=193, max=445, avg=217.83, stdev=17.09 00:14:10.281 lat (usec): min=200, max=476, avg=224.84, stdev=17.89 00:14:10.281 clat percentiles (usec): 00:14:10.281 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 206], 00:14:10.281 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 219], 00:14:10.281 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 237], 95.00th=[ 245], 00:14:10.281 | 99.00th=[ 255], 99.50th=[ 260], 99.90th=[ 445], 99.95th=[ 445], 00:14:10.281 | 99.99th=[ 445] 00:14:10.281 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:14:10.281 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:10.281 lat (usec) : 250=93.82%, 500=2.06% 00:14:10.281 lat (msec) : 50=4.12% 00:14:10.281 cpu : usr=0.29%, sys=0.20%, ctx=534, majf=0, minf=1 00:14:10.281 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:10.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.281 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:10.281 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:10.281 job1: (groupid=0, jobs=1): err= 0: pid=2628059: Tue Jul 16 00:50:44 2024 00:14:10.281 read: IOPS=19, BW=77.9KiB/s (79.8kB/s)(80.0KiB/1027msec) 00:14:10.281 slat (nsec): min=8683, max=33586, avg=28353.60, stdev=7943.89 00:14:10.281 clat (usec): min=40946, max=42075, avg=41672.55, stdev=428.23 00:14:10.281 lat (usec): min=40978, max=42090, avg=41700.90, stdev=430.50 00:14:10.281 clat percentiles (usec): 00:14:10.281 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:14:10.281 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:14:10.281 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:10.281 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:10.281 | 99.99th=[42206] 00:14:10.281 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:14:10.281 slat (nsec): min=7123, max=46074, avg=11431.39, stdev=6603.47 00:14:10.281 clat (usec): min=216, max=556, avg=361.67, stdev=52.98 00:14:10.281 lat (usec): min=224, max=564, avg=373.10, stdev=51.55 00:14:10.281 clat percentiles (usec): 00:14:10.281 | 1.00th=[ 227], 5.00th=[ 265], 10.00th=[ 289], 20.00th=[ 306], 00:14:10.281 | 30.00th=[ 355], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 383], 00:14:10.281 | 70.00th=[ 388], 80.00th=[ 396], 90.00th=[ 408], 95.00th=[ 429], 00:14:10.281 | 99.00th=[ 498], 99.50th=[ 523], 99.90th=[ 553], 99.95th=[ 553], 00:14:10.281 | 99.99th=[ 553] 00:14:10.281 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:14:10.281 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:10.281 lat (usec) : 250=3.76%, 500=91.54%, 750=0.94% 00:14:10.281 lat (msec) : 50=3.76% 00:14:10.281 cpu : usr=0.29%, sys=0.88%, ctx=532, majf=0, minf=1 00:14:10.281 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:10.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.281 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:10.281 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:10.281 job2: (groupid=0, jobs=1): err= 0: pid=2628060: Tue Jul 16 00:50:44 2024 00:14:10.281 read: IOPS=21, BW=85.1KiB/s (87.1kB/s)(88.0KiB/1034msec) 00:14:10.281 slat (nsec): min=6296, max=34725, avg=29105.73, stdev=8827.84 00:14:10.281 clat (usec): min=40655, max=41055, avg=40949.56, stdev=77.89 00:14:10.281 lat (usec): min=40661, max=41071, avg=40978.67, stdev=80.17 00:14:10.281 clat percentiles (usec): 00:14:10.281 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:14:10.281 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:10.281 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:10.281 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:10.281 | 99.99th=[41157] 00:14:10.281 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:14:10.281 slat (nsec): min=6358, max=23965, avg=7930.15, stdev=2368.80 00:14:10.281 clat (usec): min=201, max=395, avg=241.37, stdev=23.26 00:14:10.281 lat (usec): min=210, max=410, avg=249.30, stdev=24.14 00:14:10.281 clat percentiles (usec): 00:14:10.281 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 227], 00:14:10.281 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 245], 00:14:10.281 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 262], 00:14:10.281 | 99.00th=[ 351], 99.50th=[ 379], 99.90th=[ 396], 99.95th=[ 396], 00:14:10.281 | 99.99th=[ 396] 00:14:10.281 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:14:10.281 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:10.281 lat (usec) : 250=83.52%, 500=12.36% 00:14:10.281 lat (msec) : 50=4.12% 00:14:10.281 cpu : usr=0.10%, sys=0.58%, ctx=536, majf=0, minf=2 00:14:10.281 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:10.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.281 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:10.281 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:10.281 job3: (groupid=0, jobs=1): err= 0: pid=2628061: Tue Jul 16 00:50:44 2024 00:14:10.281 read: IOPS=147, BW=590KiB/s (604kB/s)(596KiB/1010msec) 00:14:10.281 slat (nsec): min=7900, max=47893, avg=21954.09, stdev=9090.30 00:14:10.281 clat (usec): min=398, max=41985, avg=5413.40, stdev=13440.78 00:14:10.281 lat (usec): min=411, max=42020, avg=5435.36, stdev=13443.95 00:14:10.281 clat percentiles (usec): 00:14:10.281 | 1.00th=[ 400], 5.00th=[ 404], 10.00th=[ 420], 20.00th=[ 429], 00:14:10.281 | 30.00th=[ 437], 40.00th=[ 441], 50.00th=[ 449], 60.00th=[ 461], 00:14:10.281 | 70.00th=[ 469], 80.00th=[ 478], 90.00th=[41157], 95.00th=[41681], 00:14:10.281 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:10.281 | 99.99th=[42206] 00:14:10.281 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:14:10.281 slat (nsec): min=8063, max=53776, avg=13870.91, stdev=7314.48 00:14:10.281 clat (usec): min=231, max=583, avg=364.43, stdev=41.20 
00:14:10.281 lat (usec): min=250, max=593, avg=378.31, stdev=38.61 00:14:10.281 clat percentiles (usec): 00:14:10.281 | 1.00th=[ 245], 5.00th=[ 281], 10.00th=[ 302], 20.00th=[ 338], 00:14:10.281 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 371], 60.00th=[ 379], 00:14:10.281 | 70.00th=[ 388], 80.00th=[ 392], 90.00th=[ 404], 95.00th=[ 416], 00:14:10.281 | 99.00th=[ 441], 99.50th=[ 457], 99.90th=[ 586], 99.95th=[ 586], 00:14:10.281 | 99.99th=[ 586] 00:14:10.281 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:14:10.281 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:10.281 lat (usec) : 250=1.21%, 500=95.16%, 750=0.91% 00:14:10.281 lat (msec) : 50=2.72% 00:14:10.281 cpu : usr=0.40%, sys=1.09%, ctx=664, majf=0, minf=1 00:14:10.281 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:10.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.281 issued rwts: total=149,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:10.281 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:10.281 00:14:10.281 Run status group 0 (all jobs): 00:14:10.281 READ: bw=824KiB/s (844kB/s), 77.9KiB/s-590KiB/s (79.8kB/s-604kB/s), io=852KiB (872kB), run=1010-1034msec 00:14:10.281 WRITE: bw=7923KiB/s (8113kB/s), 1981KiB/s-2028KiB/s (2028kB/s-2076kB/s), io=8192KiB (8389kB), run=1010-1034msec 00:14:10.281 00:14:10.281 Disk stats (read/write): 00:14:10.281 nvme0n1: ios=67/512, merge=0/0, ticks=888/106, in_queue=994, util=89.28% 00:14:10.281 nvme0n2: ios=65/512, merge=0/0, ticks=706/182, in_queue=888, util=89.26% 00:14:10.281 nvme0n3: ios=40/512, merge=0/0, ticks=1615/120, in_queue=1735, util=100.00% 00:14:10.281 nvme0n4: ios=61/512, merge=0/0, ticks=1596/183, in_queue=1779, util=97.85% 00:14:10.281 00:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:10.281 [global] 00:14:10.281 thread=1 00:14:10.281 invalidate=1 00:14:10.282 rw=write 00:14:10.282 time_based=1 00:14:10.282 runtime=1 00:14:10.282 ioengine=libaio 00:14:10.282 direct=1 00:14:10.282 bs=4096 00:14:10.282 iodepth=128 00:14:10.282 norandommap=0 00:14:10.282 numjobs=1 00:14:10.282 00:14:10.282 verify_dump=1 00:14:10.282 verify_backlog=512 00:14:10.282 verify_state_save=0 00:14:10.282 do_verify=1 00:14:10.282 verify=crc32c-intel 00:14:10.282 [job0] 00:14:10.282 filename=/dev/nvme0n1 00:14:10.282 [job1] 00:14:10.282 filename=/dev/nvme0n2 00:14:10.282 [job2] 00:14:10.282 filename=/dev/nvme0n3 00:14:10.282 [job3] 00:14:10.282 filename=/dev/nvme0n4 00:14:10.282 Could not set queue depth (nvme0n1) 00:14:10.282 Could not set queue depth (nvme0n2) 00:14:10.282 Could not set queue depth (nvme0n3) 00:14:10.282 Could not set queue depth (nvme0n4) 00:14:10.282 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:10.282 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:10.282 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:10.282 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:10.282 fio-3.35 00:14:10.282 Starting 4 threads 00:14:11.656 00:14:11.656 job0: (groupid=0, jobs=1): err= 0: pid=2628405: Tue Jul 16 
00:50:46 2024 00:14:11.656 read: IOPS=2885, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1005msec) 00:14:11.656 slat (usec): min=2, max=17639, avg=174.00, stdev=1226.36 00:14:11.656 clat (usec): min=3925, max=78059, avg=23370.42, stdev=12637.10 00:14:11.656 lat (usec): min=8011, max=83186, avg=23544.42, stdev=12741.69 00:14:11.656 clat percentiles (usec): 00:14:11.656 | 1.00th=[ 8160], 5.00th=[11469], 10.00th=[12387], 20.00th=[13829], 00:14:11.656 | 30.00th=[16057], 40.00th=[17695], 50.00th=[18482], 60.00th=[21627], 00:14:11.656 | 70.00th=[24249], 80.00th=[29230], 90.00th=[42730], 95.00th=[55837], 00:14:11.656 | 99.00th=[60556], 99.50th=[71828], 99.90th=[74974], 99.95th=[74974], 00:14:11.656 | 99.99th=[78119] 00:14:11.656 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:14:11.656 slat (usec): min=3, max=22472, avg=151.19, stdev=1060.80 00:14:11.656 clat (usec): min=1448, max=75049, avg=19442.90, stdev=11976.61 00:14:11.656 lat (usec): min=1460, max=75055, avg=19594.09, stdev=12066.05 00:14:11.656 clat percentiles (usec): 00:14:11.656 | 1.00th=[ 3949], 5.00th=[ 8160], 10.00th=[ 9765], 20.00th=[12780], 00:14:11.656 | 30.00th=[13435], 40.00th=[14091], 50.00th=[14877], 60.00th=[16450], 00:14:11.656 | 70.00th=[20841], 80.00th=[25035], 90.00th=[33424], 95.00th=[42206], 00:14:11.656 | 99.00th=[66323], 99.50th=[69731], 99.90th=[74974], 99.95th=[74974], 00:14:11.656 | 99.99th=[74974] 00:14:11.656 bw ( KiB/s): min=11072, max=13504, per=19.97%, avg=12288.00, stdev=1719.68, samples=2 00:14:11.656 iops : min= 2768, max= 3376, avg=3072.00, stdev=429.92, samples=2 00:14:11.656 lat (msec) : 2=0.23%, 4=0.45%, 10=5.76%, 20=54.29%, 50=34.58% 00:14:11.656 lat (msec) : 100=4.69% 00:14:11.656 cpu : usr=3.19%, sys=4.58%, ctx=247, majf=0, minf=1 00:14:11.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:14:11.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:11.656 issued rwts: total=2900,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:11.656 job1: (groupid=0, jobs=1): err= 0: pid=2628406: Tue Jul 16 00:50:46 2024 00:14:11.656 read: IOPS=4740, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1002msec) 00:14:11.656 slat (usec): min=2, max=11841, avg=98.16, stdev=547.81 00:14:11.656 clat (usec): min=845, max=60074, avg=12050.38, stdev=3771.50 00:14:11.656 lat (usec): min=3671, max=60081, avg=12148.54, stdev=3796.80 00:14:11.656 clat percentiles (usec): 00:14:11.656 | 1.00th=[ 4228], 5.00th=[ 8291], 10.00th=[ 9503], 20.00th=[10945], 00:14:11.656 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:14:11.656 | 70.00th=[12256], 80.00th=[12911], 90.00th=[13960], 95.00th=[15139], 00:14:11.656 | 99.00th=[21890], 99.50th=[24249], 99.90th=[58983], 99.95th=[58983], 00:14:11.656 | 99.99th=[60031] 00:14:11.656 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:14:11.656 slat (usec): min=3, max=7188, avg=95.80, stdev=498.92 00:14:11.656 clat (usec): min=5397, max=51370, avg=13532.30, stdev=4891.48 00:14:11.656 lat (usec): min=5401, max=51377, avg=13628.10, stdev=4908.49 00:14:11.656 clat percentiles (usec): 00:14:11.656 | 1.00th=[ 8356], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[11076], 00:14:11.656 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12780], 60.00th=[13173], 00:14:11.656 | 70.00th=[13698], 80.00th=[14484], 90.00th=[16057], 95.00th=[19006], 00:14:11.656 | 
99.00th=[41681], 99.50th=[44303], 99.90th=[49021], 99.95th=[51119], 00:14:11.656 | 99.99th=[51119] 00:14:11.656 bw ( KiB/s): min=20480, max=20521, per=33.32%, avg=20500.50, stdev=28.99, samples=2 00:14:11.656 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:14:11.656 lat (usec) : 1000=0.01% 00:14:11.656 lat (msec) : 4=0.23%, 10=9.09%, 20=87.79%, 50=2.60%, 100=0.27% 00:14:11.656 cpu : usr=5.29%, sys=8.59%, ctx=475, majf=0, minf=1 00:14:11.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:11.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:11.656 issued rwts: total=4750,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:11.656 job2: (groupid=0, jobs=1): err= 0: pid=2628407: Tue Jul 16 00:50:46 2024 00:14:11.656 read: IOPS=2720, BW=10.6MiB/s (11.1MB/s)(11.0MiB/1032msec) 00:14:11.656 slat (usec): min=2, max=21062, avg=161.31, stdev=1130.90 00:14:11.656 clat (usec): min=4830, max=86360, avg=20861.35, stdev=12449.79 00:14:11.656 lat (usec): min=4837, max=86367, avg=21022.67, stdev=12527.31 00:14:11.656 clat percentiles (usec): 00:14:11.656 | 1.00th=[ 7439], 5.00th=[10945], 10.00th=[12256], 20.00th=[13566], 00:14:11.656 | 30.00th=[15139], 40.00th=[15664], 50.00th=[16712], 60.00th=[18482], 00:14:11.656 | 70.00th=[20841], 80.00th=[23725], 90.00th=[40633], 95.00th=[49546], 00:14:11.656 | 99.00th=[65799], 99.50th=[79168], 99.90th=[86508], 99.95th=[86508], 00:14:11.656 | 99.99th=[86508] 00:14:11.656 write: IOPS=2976, BW=11.6MiB/s (12.2MB/s)(12.0MiB/1032msec); 0 zone resets 00:14:11.656 slat (usec): min=4, max=22205, avg=159.36, stdev=849.52 00:14:11.656 clat (usec): min=3861, max=67709, avg=22558.82, stdev=14010.28 00:14:11.656 lat (usec): min=3868, max=67730, avg=22718.17, stdev=14085.84 00:14:11.656 clat percentiles (usec): 00:14:11.656 | 1.00th=[ 4817], 5.00th=[ 8356], 10.00th=[11076], 20.00th=[12780], 00:14:11.656 | 30.00th=[13304], 40.00th=[15139], 50.00th=[18482], 60.00th=[19792], 00:14:11.656 | 70.00th=[24773], 80.00th=[29492], 90.00th=[46400], 95.00th=[57410], 00:14:11.656 | 99.00th=[64750], 99.50th=[66847], 99.90th=[67634], 99.95th=[67634], 00:14:11.656 | 99.99th=[67634] 00:14:11.656 bw ( KiB/s): min=11984, max=12592, per=19.97%, avg=12288.00, stdev=429.92, samples=2 00:14:11.656 iops : min= 2996, max= 3148, avg=3072.00, stdev=107.48, samples=2 00:14:11.656 lat (msec) : 4=0.20%, 10=4.95%, 20=56.82%, 50=31.31%, 100=6.72% 00:14:11.656 cpu : usr=4.75%, sys=4.07%, ctx=389, majf=0, minf=1 00:14:11.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:14:11.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:11.656 issued rwts: total=2808,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:11.656 job3: (groupid=0, jobs=1): err= 0: pid=2628408: Tue Jul 16 00:50:46 2024 00:14:11.656 read: IOPS=4216, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1002msec) 00:14:11.656 slat (usec): min=2, max=11405, avg=107.11, stdev=760.53 00:14:11.656 clat (usec): min=1727, max=29015, avg=13759.50, stdev=3642.33 00:14:11.656 lat (usec): min=1734, max=34807, avg=13866.60, stdev=3681.45 00:14:11.656 clat percentiles (usec): 00:14:11.656 | 1.00th=[ 8848], 5.00th=[10028], 10.00th=[10683], 
20.00th=[11207], 00:14:11.656 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12649], 60.00th=[12911], 00:14:11.656 | 70.00th=[14222], 80.00th=[16057], 90.00th=[19268], 95.00th=[22152], 00:14:11.656 | 99.00th=[27132], 99.50th=[27132], 99.90th=[27132], 99.95th=[27132], 00:14:11.656 | 99.99th=[28967] 00:14:11.656 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:14:11.656 slat (usec): min=3, max=19939, avg=110.88, stdev=896.30 00:14:11.656 clat (usec): min=2153, max=54124, avg=14758.12, stdev=8643.76 00:14:11.657 lat (usec): min=2174, max=54140, avg=14868.99, stdev=8722.91 00:14:11.657 clat percentiles (usec): 00:14:11.657 | 1.00th=[ 3949], 5.00th=[ 6259], 10.00th=[ 7242], 20.00th=[ 8848], 00:14:11.657 | 30.00th=[10290], 40.00th=[11863], 50.00th=[12387], 60.00th=[12649], 00:14:11.657 | 70.00th=[13698], 80.00th=[17433], 90.00th=[32900], 95.00th=[34866], 00:14:11.657 | 99.00th=[36963], 99.50th=[41681], 99.90th=[46400], 99.95th=[51643], 00:14:11.657 | 99.99th=[54264] 00:14:11.657 bw ( KiB/s): min=16176, max=20688, per=29.96%, avg=18432.00, stdev=3190.47, samples=2 00:14:11.657 iops : min= 4044, max= 5172, avg=4608.00, stdev=797.62, samples=2 00:14:11.657 lat (msec) : 2=0.05%, 4=0.66%, 10=16.36%, 20=68.95%, 50=13.96% 00:14:11.657 lat (msec) : 100=0.03% 00:14:11.657 cpu : usr=4.50%, sys=6.99%, ctx=388, majf=0, minf=1 00:14:11.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:11.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:11.657 issued rwts: total=4225,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:11.657 00:14:11.657 Run status group 0 (all jobs): 00:14:11.657 READ: bw=55.6MiB/s (58.3MB/s), 10.6MiB/s-18.5MiB/s (11.1MB/s-19.4MB/s), io=57.4MiB (60.1MB), run=1002-1032msec 00:14:11.657 WRITE: bw=60.1MiB/s (63.0MB/s), 11.6MiB/s-20.0MiB/s (12.2MB/s-20.9MB/s), io=62.0MiB (65.0MB), run=1002-1032msec 00:14:11.657 00:14:11.657 Disk stats (read/write): 00:14:11.657 nvme0n1: ios=2610/2607, merge=0/0, ticks=33986/26585, in_queue=60571, util=87.17% 00:14:11.657 nvme0n2: ios=4136/4228, merge=0/0, ticks=18616/19027, in_queue=37643, util=97.36% 00:14:11.657 nvme0n3: ios=2435/2560, merge=0/0, ticks=47947/55053, in_queue=103000, util=98.12% 00:14:11.657 nvme0n4: ios=3606/3711, merge=0/0, ticks=37739/36118, in_queue=73857, util=98.95% 00:14:11.657 00:50:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:11.657 [global] 00:14:11.657 thread=1 00:14:11.657 invalidate=1 00:14:11.657 rw=randwrite 00:14:11.657 time_based=1 00:14:11.657 runtime=1 00:14:11.657 ioengine=libaio 00:14:11.657 direct=1 00:14:11.657 bs=4096 00:14:11.657 iodepth=128 00:14:11.657 norandommap=0 00:14:11.657 numjobs=1 00:14:11.657 00:14:11.657 verify_dump=1 00:14:11.657 verify_backlog=512 00:14:11.657 verify_state_save=0 00:14:11.657 do_verify=1 00:14:11.657 verify=crc32c-intel 00:14:11.657 [job0] 00:14:11.657 filename=/dev/nvme0n1 00:14:11.657 [job1] 00:14:11.657 filename=/dev/nvme0n2 00:14:11.657 [job2] 00:14:11.657 filename=/dev/nvme0n3 00:14:11.657 [job3] 00:14:11.657 filename=/dev/nvme0n4 00:14:11.657 Could not set queue depth (nvme0n1) 00:14:11.657 Could not set queue depth (nvme0n2) 00:14:11.657 Could not set queue depth (nvme0n3) 00:14:11.657 Could not set queue depth 
(nvme0n4) 00:14:11.657 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:11.657 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:11.657 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:11.657 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:11.657 fio-3.35 00:14:11.657 Starting 4 threads 00:14:13.044 00:14:13.044 job0: (groupid=0, jobs=1): err= 0: pid=2628638: Tue Jul 16 00:50:47 2024 00:14:13.044 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:14:13.044 slat (usec): min=2, max=12923, avg=123.07, stdev=735.28 00:14:13.044 clat (usec): min=7750, max=30959, avg=16040.39, stdev=4699.72 00:14:13.044 lat (usec): min=7755, max=30977, avg=16163.46, stdev=4745.75 00:14:13.044 clat percentiles (usec): 00:14:13.044 | 1.00th=[ 8586], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[11469], 00:14:13.044 | 30.00th=[13173], 40.00th=[14222], 50.00th=[15926], 60.00th=[16909], 00:14:13.044 | 70.00th=[18220], 80.00th=[20317], 90.00th=[22414], 95.00th=[25297], 00:14:13.044 | 99.00th=[27395], 99.50th=[27657], 99.90th=[28181], 99.95th=[29754], 00:14:13.044 | 99.99th=[31065] 00:14:13.044 write: IOPS=3622, BW=14.2MiB/s (14.8MB/s)(14.2MiB/1005msec); 0 zone resets 00:14:13.044 slat (usec): min=4, max=10230, avg=141.93, stdev=781.66 00:14:13.044 clat (usec): min=4270, max=71363, avg=18961.97, stdev=11222.79 00:14:13.044 lat (usec): min=4288, max=71405, avg=19103.90, stdev=11302.00 00:14:13.044 clat percentiles (usec): 00:14:13.044 | 1.00th=[ 7046], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10814], 00:14:13.044 | 30.00th=[11731], 40.00th=[13698], 50.00th=[15664], 60.00th=[18482], 00:14:13.044 | 70.00th=[19268], 80.00th=[25822], 90.00th=[28443], 95.00th=[39060], 00:14:13.044 | 99.00th=[66847], 99.50th=[68682], 99.90th=[71828], 99.95th=[71828], 00:14:13.045 | 99.99th=[71828] 00:14:13.045 bw ( KiB/s): min= 9424, max=19248, per=23.71%, avg=14336.00, stdev=6946.62, samples=2 00:14:13.045 iops : min= 2356, max= 4812, avg=3584.00, stdev=1736.65, samples=2 00:14:13.045 lat (msec) : 10=9.94%, 20=64.60%, 50=23.83%, 100=1.63% 00:14:13.045 cpu : usr=4.88%, sys=7.67%, ctx=295, majf=0, minf=13 00:14:13.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:14:13.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:13.045 issued rwts: total=3584,3641,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:13.045 job1: (groupid=0, jobs=1): err= 0: pid=2628639: Tue Jul 16 00:50:47 2024 00:14:13.045 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:14:13.045 slat (usec): min=2, max=7423, avg=100.33, stdev=536.77 00:14:13.045 clat (usec): min=5463, max=31638, avg=13239.44, stdev=5157.35 00:14:13.045 lat (usec): min=5478, max=31673, avg=13339.77, stdev=5192.06 00:14:13.045 clat percentiles (usec): 00:14:13.045 | 1.00th=[ 7439], 5.00th=[ 8356], 10.00th=[ 9241], 20.00th=[ 9896], 00:14:13.045 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11600], 60.00th=[12256], 00:14:13.045 | 70.00th=[13173], 80.00th=[15664], 90.00th=[21627], 95.00th=[25560], 00:14:13.045 | 99.00th=[30278], 99.50th=[31065], 99.90th=[31589], 99.95th=[31589], 00:14:13.045 | 99.99th=[31589] 
00:14:13.045 write: IOPS=4712, BW=18.4MiB/s (19.3MB/s)(18.4MiB/1001msec); 0 zone resets 00:14:13.045 slat (usec): min=3, max=38193, avg=102.75, stdev=752.25 00:14:13.045 clat (usec): min=375, max=53849, avg=13774.86, stdev=7322.52 00:14:13.045 lat (usec): min=402, max=53891, avg=13877.60, stdev=7351.40 00:14:13.045 clat percentiles (usec): 00:14:13.045 | 1.00th=[ 4948], 5.00th=[ 7635], 10.00th=[ 8848], 20.00th=[ 9896], 00:14:13.045 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11731], 60.00th=[12387], 00:14:13.045 | 70.00th=[14222], 80.00th=[16712], 90.00th=[19268], 95.00th=[29230], 00:14:13.045 | 99.00th=[47449], 99.50th=[47449], 99.90th=[47973], 99.95th=[47973], 00:14:13.045 | 99.99th=[53740] 00:14:13.045 bw ( KiB/s): min=17184, max=17184, per=28.42%, avg=17184.00, stdev= 0.00, samples=1 00:14:13.045 iops : min= 4296, max= 4296, avg=4296.00, stdev= 0.00, samples=1 00:14:13.045 lat (usec) : 500=0.02% 00:14:13.045 lat (msec) : 4=0.15%, 10=24.40%, 20=65.57%, 50=9.86%, 100=0.01% 00:14:13.045 cpu : usr=5.70%, sys=10.20%, ctx=419, majf=0, minf=17 00:14:13.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:13.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:13.045 issued rwts: total=4608,4717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:13.045 job2: (groupid=0, jobs=1): err= 0: pid=2628640: Tue Jul 16 00:50:47 2024 00:14:13.045 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:14:13.045 slat (usec): min=2, max=10537, avg=122.78, stdev=660.96 00:14:13.045 clat (usec): min=6838, max=35750, avg=15882.44, stdev=5421.88 00:14:13.045 lat (usec): min=6841, max=35762, avg=16005.21, stdev=5458.11 00:14:13.045 clat percentiles (usec): 00:14:13.045 | 1.00th=[ 9110], 5.00th=[10683], 10.00th=[11600], 20.00th=[12518], 00:14:13.045 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13960], 60.00th=[14484], 00:14:13.045 | 70.00th=[16188], 80.00th=[19530], 90.00th=[24511], 95.00th=[29492], 00:14:13.045 | 99.00th=[32113], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:14:13.045 | 99.99th=[35914] 00:14:13.045 write: IOPS=4277, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1003msec); 0 zone resets 00:14:13.045 slat (usec): min=3, max=9455, avg=109.43, stdev=584.51 00:14:13.045 clat (usec): min=482, max=31795, avg=14440.53, stdev=3970.06 00:14:13.045 lat (usec): min=3811, max=32641, avg=14549.96, stdev=3993.27 00:14:13.045 clat percentiles (usec): 00:14:13.045 | 1.00th=[ 5276], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[12125], 00:14:13.045 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13042], 60.00th=[13960], 00:14:13.045 | 70.00th=[15270], 80.00th=[16712], 90.00th=[20317], 95.00th=[22938], 00:14:13.045 | 99.00th=[27657], 99.50th=[30278], 99.90th=[31851], 99.95th=[31851], 00:14:13.045 | 99.99th=[31851] 00:14:13.045 bw ( KiB/s): min=13064, max=20232, per=27.53%, avg=16648.00, stdev=5068.54, samples=2 00:14:13.045 iops : min= 3266, max= 5058, avg=4162.00, stdev=1267.14, samples=2 00:14:13.045 lat (usec) : 500=0.01% 00:14:13.045 lat (msec) : 4=0.02%, 10=3.90%, 20=82.09%, 50=13.98% 00:14:13.045 cpu : usr=4.19%, sys=4.89%, ctx=412, majf=0, minf=9 00:14:13.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:13.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:14:13.045 issued rwts: total=4096,4290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:13.045 job3: (groupid=0, jobs=1): err= 0: pid=2628641: Tue Jul 16 00:50:47 2024 00:14:13.045 read: IOPS=2287, BW=9149KiB/s (9369kB/s)(9204KiB/1006msec) 00:14:13.045 slat (usec): min=3, max=13924, avg=217.53, stdev=1192.01 00:14:13.045 clat (usec): min=562, max=50430, avg=26193.14, stdev=7190.07 00:14:13.045 lat (usec): min=9704, max=50482, avg=26410.67, stdev=7248.77 00:14:13.045 clat percentiles (usec): 00:14:13.045 | 1.00th=[10290], 5.00th=[16581], 10.00th=[16712], 20.00th=[18220], 00:14:13.045 | 30.00th=[22152], 40.00th=[24249], 50.00th=[25560], 60.00th=[27657], 00:14:13.045 | 70.00th=[30278], 80.00th=[32900], 90.00th=[34866], 95.00th=[37487], 00:14:13.045 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47973], 99.95th=[48497], 00:14:13.045 | 99.99th=[50594] 00:14:13.045 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:14:13.045 slat (usec): min=4, max=13076, avg=184.27, stdev=963.44 00:14:13.045 clat (usec): min=11501, max=47105, avg=25778.02, stdev=6528.86 00:14:13.045 lat (usec): min=11513, max=47125, avg=25962.29, stdev=6598.50 00:14:13.045 clat percentiles (usec): 00:14:13.045 | 1.00th=[14746], 5.00th=[15270], 10.00th=[15926], 20.00th=[20055], 00:14:13.045 | 30.00th=[23200], 40.00th=[24249], 50.00th=[25822], 60.00th=[27657], 00:14:13.045 | 70.00th=[28181], 80.00th=[29754], 90.00th=[34341], 95.00th=[38011], 00:14:13.045 | 99.00th=[44303], 99.50th=[45351], 99.90th=[46924], 99.95th=[46924], 00:14:13.045 | 99.99th=[46924] 00:14:13.045 bw ( KiB/s): min= 8952, max=11528, per=16.93%, avg=10240.00, stdev=1821.51, samples=2 00:14:13.045 iops : min= 2238, max= 2882, avg=2560.00, stdev=455.38, samples=2 00:14:13.045 lat (usec) : 750=0.02% 00:14:13.045 lat (msec) : 10=0.25%, 20=20.24%, 50=79.47%, 100=0.02% 00:14:13.045 cpu : usr=2.89%, sys=6.77%, ctx=226, majf=0, minf=11 00:14:13.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:14:13.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:13.045 issued rwts: total=2301,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:13.045 00:14:13.045 Run status group 0 (all jobs): 00:14:13.045 READ: bw=56.6MiB/s (59.4MB/s), 9149KiB/s-18.0MiB/s (9369kB/s-18.9MB/s), io=57.0MiB (59.8MB), run=1001-1006msec 00:14:13.045 WRITE: bw=59.1MiB/s (61.9MB/s), 9.94MiB/s-18.4MiB/s (10.4MB/s-19.3MB/s), io=59.4MiB (62.3MB), run=1001-1006msec 00:14:13.045 00:14:13.045 Disk stats (read/write): 00:14:13.045 nvme0n1: ios=3114/3279, merge=0/0, ticks=23773/27888, in_queue=51661, util=99.90% 00:14:13.045 nvme0n2: ios=3621/3798, merge=0/0, ticks=20125/17287, in_queue=37412, util=99.39% 00:14:13.045 nvme0n3: ios=3502/3584, merge=0/0, ticks=18062/15335, in_queue=33397, util=97.50% 00:14:13.045 nvme0n4: ios=2096/2199, merge=0/0, ticks=17191/16150, in_queue=33341, util=98.95% 00:14:13.045 00:50:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:13.045 00:50:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2628775 00:14:13.045 00:50:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:13.045 00:50:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 
00:14:13.045 [global] 00:14:13.045 thread=1 00:14:13.045 invalidate=1 00:14:13.045 rw=read 00:14:13.045 time_based=1 00:14:13.045 runtime=10 00:14:13.045 ioengine=libaio 00:14:13.045 direct=1 00:14:13.045 bs=4096 00:14:13.045 iodepth=1 00:14:13.045 norandommap=1 00:14:13.045 numjobs=1 00:14:13.045 00:14:13.045 [job0] 00:14:13.045 filename=/dev/nvme0n1 00:14:13.045 [job1] 00:14:13.045 filename=/dev/nvme0n2 00:14:13.045 [job2] 00:14:13.045 filename=/dev/nvme0n3 00:14:13.045 [job3] 00:14:13.045 filename=/dev/nvme0n4 00:14:13.045 Could not set queue depth (nvme0n1) 00:14:13.045 Could not set queue depth (nvme0n2) 00:14:13.045 Could not set queue depth (nvme0n3) 00:14:13.045 Could not set queue depth (nvme0n4) 00:14:13.303 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:13.303 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:13.303 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:13.303 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:13.303 fio-3.35 00:14:13.303 Starting 4 threads 00:14:15.829 00:50:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:16.394 00:50:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:16.394 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=2768896, buflen=4096 00:14:16.394 fio: pid=2628870, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:16.394 00:50:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:16.394 00:50:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:16.394 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=1978368, buflen=4096 00:14:16.394 fio: pid=2628869, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:16.961 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=25853952, buflen=4096 00:14:16.961 fio: pid=2628867, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:16.961 00:50:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:16.961 00:50:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:16.961 00:50:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:16.962 00:50:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:16.962 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=22671360, buflen=4096 00:14:16.962 fio: pid=2628868, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:17.220 00:14:17.220 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2628867: Tue Jul 16 00:50:51 2024 00:14:17.220 read: IOPS=1825, BW=7299KiB/s (7474kB/s)(24.7MiB/3459msec) 00:14:17.220 slat (usec): min=4, 
max=30167, avg=27.83, stdev=425.81 00:14:17.220 clat (usec): min=295, max=41201, avg=511.84, stdev=1772.47 00:14:17.220 lat (usec): min=301, max=41252, avg=538.48, stdev=1820.72 00:14:17.220 clat percentiles (usec): 00:14:17.220 | 1.00th=[ 310], 5.00th=[ 322], 10.00th=[ 351], 20.00th=[ 379], 00:14:17.220 | 30.00th=[ 392], 40.00th=[ 408], 50.00th=[ 424], 60.00th=[ 449], 00:14:17.220 | 70.00th=[ 478], 80.00th=[ 498], 90.00th=[ 523], 95.00th=[ 545], 00:14:17.220 | 99.00th=[ 619], 99.50th=[ 660], 99.90th=[41157], 99.95th=[41157], 00:14:17.220 | 99.99th=[41157] 00:14:17.220 bw ( KiB/s): min= 5104, max= 9536, per=52.19%, avg=7248.00, stdev=1763.47, samples=6 00:14:17.220 iops : min= 1276, max= 2384, avg=1812.00, stdev=440.87, samples=6 00:14:17.220 lat (usec) : 500=81.10%, 750=18.60%, 1000=0.06% 00:14:17.220 lat (msec) : 4=0.02%, 10=0.02%, 50=0.19% 00:14:17.220 cpu : usr=1.50%, sys=3.90%, ctx=6320, majf=0, minf=1 00:14:17.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:17.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.220 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.220 issued rwts: total=6313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:17.220 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2628868: Tue Jul 16 00:50:51 2024 00:14:17.220 read: IOPS=1477, BW=5910KiB/s (6052kB/s)(21.6MiB/3746msec) 00:14:17.220 slat (usec): min=4, max=14534, avg=21.43, stdev=276.92 00:14:17.220 clat (usec): min=300, max=43704, avg=647.24, stdev=3157.85 00:14:17.220 lat (usec): min=305, max=58238, avg=668.67, stdev=3205.65 00:14:17.220 clat percentiles (usec): 00:14:17.220 | 1.00th=[ 318], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 343], 00:14:17.220 | 30.00th=[ 355], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 392], 00:14:17.220 | 70.00th=[ 412], 80.00th=[ 449], 90.00th=[ 494], 95.00th=[ 510], 00:14:17.220 | 99.00th=[ 644], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:17.220 | 99.99th=[43779] 00:14:17.220 bw ( KiB/s): min= 3171, max=10208, per=44.87%, avg=6232.43, stdev=2534.86, samples=7 00:14:17.220 iops : min= 792, max= 2552, avg=1558.00, stdev=633.87, samples=7 00:14:17.220 lat (usec) : 500=92.30%, 750=6.90%, 1000=0.07% 00:14:17.220 lat (msec) : 2=0.02%, 4=0.02%, 10=0.04%, 50=0.63% 00:14:17.220 cpu : usr=1.17%, sys=2.88%, ctx=5539, majf=0, minf=1 00:14:17.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:17.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.220 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.220 issued rwts: total=5536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:17.220 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2628869: Tue Jul 16 00:50:51 2024 00:14:17.220 read: IOPS=151, BW=606KiB/s (620kB/s)(1932KiB/3189msec) 00:14:17.220 slat (usec): min=4, max=12848, avg=38.96, stdev=583.48 00:14:17.220 clat (usec): min=324, max=45815, avg=6513.07, stdev=14517.38 00:14:17.220 lat (usec): min=330, max=45842, avg=6552.10, stdev=14570.67 00:14:17.220 clat percentiles (usec): 00:14:17.220 | 1.00th=[ 326], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 355], 00:14:17.220 | 30.00th=[ 367], 40.00th=[ 388], 50.00th=[ 404], 60.00th=[ 420], 00:14:17.220 | 
70.00th=[ 449], 80.00th=[ 515], 90.00th=[41157], 95.00th=[41157], 00:14:17.220 | 99.00th=[41157], 99.50th=[41157], 99.90th=[45876], 99.95th=[45876], 00:14:17.220 | 99.99th=[45876] 00:14:17.220 bw ( KiB/s): min= 96, max= 368, per=1.04%, avg=144.00, stdev=109.81, samples=6 00:14:17.220 iops : min= 24, max= 92, avg=36.00, stdev=27.45, samples=6 00:14:17.220 lat (usec) : 500=78.72%, 750=5.99% 00:14:17.220 lat (msec) : 50=15.08% 00:14:17.220 cpu : usr=0.03%, sys=0.28%, ctx=485, majf=0, minf=1 00:14:17.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:17.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.220 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.220 issued rwts: total=484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:17.220 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2628870: Tue Jul 16 00:50:51 2024 00:14:17.220 read: IOPS=232, BW=927KiB/s (949kB/s)(2704KiB/2917msec) 00:14:17.220 slat (nsec): min=5596, max=54653, avg=18534.44, stdev=9545.18 00:14:17.220 clat (usec): min=324, max=41535, avg=4250.58, stdev=11827.71 00:14:17.220 lat (usec): min=331, max=41568, avg=4269.12, stdev=11829.19 00:14:17.220 clat percentiles (usec): 00:14:17.220 | 1.00th=[ 343], 5.00th=[ 359], 10.00th=[ 367], 20.00th=[ 383], 00:14:17.220 | 30.00th=[ 396], 40.00th=[ 412], 50.00th=[ 424], 60.00th=[ 441], 00:14:17.220 | 70.00th=[ 478], 80.00th=[ 529], 90.00th=[ 644], 95.00th=[41157], 00:14:17.220 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:14:17.220 | 99.99th=[41681] 00:14:17.220 bw ( KiB/s): min= 96, max= 2536, per=5.42%, avg=753.60, stdev=1058.74, samples=5 00:14:17.220 iops : min= 24, max= 634, avg=188.40, stdev=264.69, samples=5 00:14:17.220 lat (usec) : 500=73.56%, 750=16.84% 00:14:17.220 lat (msec) : 50=9.45% 00:14:17.220 cpu : usr=0.17%, sys=0.48%, ctx=677, majf=0, minf=1 00:14:17.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:17.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.220 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.220 issued rwts: total=677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:17.220 00:14:17.220 Run status group 0 (all jobs): 00:14:17.220 READ: bw=13.6MiB/s (14.2MB/s), 606KiB/s-7299KiB/s (620kB/s-7474kB/s), io=50.8MiB (53.3MB), run=2917-3746msec 00:14:17.220 00:14:17.220 Disk stats (read/write): 00:14:17.220 nvme0n1: ios=6190/0, merge=0/0, ticks=3207/0, in_queue=3207, util=99.00% 00:14:17.220 nvme0n2: ios=5532/0, merge=0/0, ticks=3396/0, in_queue=3396, util=95.66% 00:14:17.220 nvme0n3: ios=314/0, merge=0/0, ticks=3083/0, in_queue=3083, util=96.41% 00:14:17.220 nvme0n4: ios=675/0, merge=0/0, ticks=2825/0, in_queue=2825, util=96.75% 00:14:17.220 00:50:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:17.220 00:50:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:17.479 00:50:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:17.479 00:50:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:17.737 00:50:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:17.737 00:50:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:17.995 00:50:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:17.995 00:50:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:18.253 00:50:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:18.253 00:50:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2628775 00:14:18.253 00:50:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:18.253 00:50:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:18.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.511 00:50:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:18.511 00:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:14:18.511 00:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:18.511 00:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.511 00:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:18.511 00:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.511 00:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:14:18.511 00:50:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:18.511 00:50:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:18.511 nvmf hotplug test: fio failed as expected 00:14:18.511 00:50:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:18.768 rmmod nvme_tcp 00:14:18.768 rmmod nvme_fabrics 00:14:18.768 rmmod nvme_keyring 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2626753 ']' 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2626753 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2626753 ']' 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2626753 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2626753 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2626753' 00:14:18.768 killing process with pid 2626753 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2626753 00:14:18.768 00:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2626753 00:14:19.025 00:50:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:19.025 00:50:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:19.025 00:50:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:19.025 00:50:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:19.025 00:50:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:19.025 00:50:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.025 00:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.025 00:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.561 00:50:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:21.561 00:14:21.561 real 0m23.671s 00:14:21.561 user 1m22.350s 00:14:21.561 sys 0m6.544s 00:14:21.561 00:50:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:21.561 00:50:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.561 ************************************ 00:14:21.561 END TEST nvmf_fio_target 00:14:21.561 ************************************ 00:14:21.561 00:50:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:21.561 00:50:55 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:21.561 00:50:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:21.561 00:50:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:21.561 00:50:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:21.561 ************************************ 00:14:21.561 START TEST nvmf_bdevio 00:14:21.561 ************************************ 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:21.561 * Looking for test storage... 00:14:21.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:14:21.561 00:50:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:23.495 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:23.495 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:23.495 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:23.495 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:23.495 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:23.495 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:23.495 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:23.495 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:23.495 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:23.495 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:23.495 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:23.495 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:23.495 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:23.495 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:23.496 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:23.496 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:23.496 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:23.496 
Found net devices under 0000:0a:00.1: cvl_0_1 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:23.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:14:23.496 00:14:23.496 --- 10.0.0.2 ping statistics --- 00:14:23.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.496 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:23.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:23.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:14:23.496 00:14:23.496 --- 10.0.0.1 ping statistics --- 00:14:23.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.496 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:23.496 00:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:23.496 00:50:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:23.496 00:50:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.496 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:23.496 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:23.496 00:50:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2631485 00:14:23.496 00:50:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:23.496 00:50:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2631485 00:14:23.496 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2631485 ']' 00:14:23.496 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.496 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:23.496 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.496 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:23.496 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:23.496 [2024-07-16 00:50:58.075954] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:14:23.496 [2024-07-16 00:50:58.076024] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.496 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.496 [2024-07-16 00:50:58.143931] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:23.754 [2024-07-16 00:50:58.270335] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.754 [2024-07-16 00:50:58.270391] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:23.754 [2024-07-16 00:50:58.270409] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.754 [2024-07-16 00:50:58.270423] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.754 [2024-07-16 00:50:58.270435] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.754 [2024-07-16 00:50:58.270516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:23.754 [2024-07-16 00:50:58.270554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:23.754 [2024-07-16 00:50:58.270607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:23.754 [2024-07-16 00:50:58.270610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:23.754 [2024-07-16 00:50:58.424540] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:23.754 Malloc0 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
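For reference, the target-side setup traced above boils down to the following RPC sequence (a condensed sketch, not part of the captured log; it assumes scripts/rpc.py is invoked directly against the default /var/tmp/spdk.sock rather than through the test's rpc_cmd helper, with the target already running as shown earlier):

  # create the NVMe/TCP transport with the options the test passes (-o, -u 8192)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above)
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # subsystem that allows any host, backed by Malloc0, listening on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420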
00:14:23.754 [2024-07-16 00:50:58.475605] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:23.754 { 00:14:23.754 "params": { 00:14:23.754 "name": "Nvme$subsystem", 00:14:23.754 "trtype": "$TEST_TRANSPORT", 00:14:23.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:23.754 "adrfam": "ipv4", 00:14:23.754 "trsvcid": "$NVMF_PORT", 00:14:23.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:23.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:23.754 "hdgst": ${hdgst:-false}, 00:14:23.754 "ddgst": ${ddgst:-false} 00:14:23.754 }, 00:14:23.754 "method": "bdev_nvme_attach_controller" 00:14:23.754 } 00:14:23.754 EOF 00:14:23.754 )") 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:23.754 00:50:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:23.754 "params": { 00:14:23.754 "name": "Nvme1", 00:14:23.754 "trtype": "tcp", 00:14:23.754 "traddr": "10.0.0.2", 00:14:23.754 "adrfam": "ipv4", 00:14:23.754 "trsvcid": "4420", 00:14:23.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:23.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:23.754 "hdgst": false, 00:14:23.754 "ddgst": false 00:14:23.754 }, 00:14:23.754 "method": "bdev_nvme_attach_controller" 00:14:23.754 }' 00:14:24.011 [2024-07-16 00:50:58.519742] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
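The JSON fragment printed by gen_nvmf_target_json above is what points bdevio at the exported controller. Saved to a file, a roughly equivalent standalone run would look like this (a sketch, not from the log: the attach parameters are copied verbatim from the printf above, the /tmp path is arbitrary, and the surrounding "subsystems"/"bdev" envelope is the standard SPDK application config layout and is assumed here):

  cat > /tmp/bdevio_nvme.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json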
00:14:24.011 [2024-07-16 00:50:58.519831] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2631638 ] 00:14:24.011 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.011 [2024-07-16 00:50:58.580306] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:24.011 [2024-07-16 00:50:58.692290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.011 [2024-07-16 00:50:58.692343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.011 [2024-07-16 00:50:58.692346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.268 I/O targets: 00:14:24.268 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:24.268 00:14:24.268 00:14:24.268 CUnit - A unit testing framework for C - Version 2.1-3 00:14:24.268 http://cunit.sourceforge.net/ 00:14:24.268 00:14:24.268 00:14:24.268 Suite: bdevio tests on: Nvme1n1 00:14:24.268 Test: blockdev write read block ...passed 00:14:24.268 Test: blockdev write zeroes read block ...passed 00:14:24.268 Test: blockdev write zeroes read no split ...passed 00:14:24.525 Test: blockdev write zeroes read split ...passed 00:14:24.525 Test: blockdev write zeroes read split partial ...passed 00:14:24.525 Test: blockdev reset ...[2024-07-16 00:50:59.121810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:24.525 [2024-07-16 00:50:59.121939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119e6d0 (9): Bad file descriptor 00:14:24.525 [2024-07-16 00:50:59.137744] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:24.525 passed 00:14:24.525 Test: blockdev write read 8 blocks ...passed 00:14:24.525 Test: blockdev write read size > 128k ...passed 00:14:24.525 Test: blockdev write read invalid size ...passed 00:14:24.525 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:24.525 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:24.525 Test: blockdev write read max offset ...passed 00:14:24.782 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:24.782 Test: blockdev writev readv 8 blocks ...passed 00:14:24.782 Test: blockdev writev readv 30 x 1block ...passed 00:14:24.782 Test: blockdev writev readv block ...passed 00:14:24.782 Test: blockdev writev readv size > 128k ...passed 00:14:24.782 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:24.782 Test: blockdev comparev and writev ...[2024-07-16 00:50:59.399082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:24.782 [2024-07-16 00:50:59.399118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:24.782 [2024-07-16 00:50:59.399142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:24.782 [2024-07-16 00:50:59.399158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:24.782 [2024-07-16 00:50:59.399588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:24.782 [2024-07-16 00:50:59.399614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:24.782 [2024-07-16 00:50:59.399637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:24.782 [2024-07-16 00:50:59.399653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:24.782 [2024-07-16 00:50:59.400053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:24.782 [2024-07-16 00:50:59.400078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:24.782 [2024-07-16 00:50:59.400100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:24.782 [2024-07-16 00:50:59.400117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:24.782 [2024-07-16 00:50:59.400545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:24.782 [2024-07-16 00:50:59.400570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:24.782 [2024-07-16 00:50:59.400600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:24.782 [2024-07-16 00:50:59.400617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:24.782 passed 00:14:24.782 Test: blockdev nvme passthru rw ...passed 00:14:24.782 Test: blockdev nvme passthru vendor specific ...[2024-07-16 00:50:59.484269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:24.782 [2024-07-16 00:50:59.484297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:24.782 [2024-07-16 00:50:59.484499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:24.782 [2024-07-16 00:50:59.484523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:24.782 [2024-07-16 00:50:59.484723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:24.782 [2024-07-16 00:50:59.484747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:24.782 [2024-07-16 00:50:59.484955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:24.782 [2024-07-16 00:50:59.484980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:24.782 passed 00:14:24.783 Test: blockdev nvme admin passthru ...passed 00:14:25.041 Test: blockdev copy ...passed 00:14:25.041 00:14:25.041 Run Summary: Type Total Ran Passed Failed Inactive 00:14:25.041 suites 1 1 n/a 0 0 00:14:25.041 tests 23 23 23 0 0 00:14:25.041 asserts 152 152 152 0 n/a 00:14:25.041 00:14:25.041 Elapsed time = 1.266 seconds 00:14:25.041 00:50:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.041 00:50:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.041 00:50:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:25.041 00:50:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.041 00:50:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:25.041 00:50:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:25.041 00:50:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:25.041 00:50:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:25.041 00:50:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:25.041 00:50:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:25.041 00:50:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:25.041 00:50:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:25.298 rmmod nvme_tcp 00:14:25.298 rmmod nvme_fabrics 00:14:25.298 rmmod nvme_keyring 00:14:25.298 00:50:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:25.298 00:50:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:25.298 00:50:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:25.298 00:50:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2631485 ']' 00:14:25.298 00:50:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2631485 00:14:25.298 00:50:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
2631485 ']' 00:14:25.298 00:50:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2631485 00:14:25.298 00:50:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:14:25.298 00:50:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:25.298 00:50:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2631485 00:14:25.298 00:50:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:25.298 00:50:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:25.298 00:50:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2631485' 00:14:25.298 killing process with pid 2631485 00:14:25.298 00:50:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2631485 00:14:25.298 00:50:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2631485 00:14:25.556 00:51:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:25.556 00:51:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:25.556 00:51:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:25.556 00:51:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:25.556 00:51:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:25.556 00:51:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.556 00:51:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.556 00:51:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.456 00:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:27.456 00:14:27.456 real 0m6.366s 00:14:27.456 user 0m10.473s 00:14:27.456 sys 0m2.032s 00:14:27.456 00:51:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:27.456 00:51:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:27.456 ************************************ 00:14:27.456 END TEST nvmf_bdevio 00:14:27.456 ************************************ 00:14:27.714 00:51:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:27.714 00:51:02 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:27.714 00:51:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:27.714 00:51:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:27.714 00:51:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:27.714 ************************************ 00:14:27.714 START TEST nvmf_auth_target 00:14:27.714 ************************************ 00:14:27.714 00:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:27.714 * Looking for test storage... 
00:14:27.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:27.714 00:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:27.714 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:27.714 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.714 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:27.715 00:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.617 00:51:04 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.617 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:29.618 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:29.618 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:14:29.618 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:29.618 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:29.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:29.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:14:29.618 00:14:29.618 --- 10.0.0.2 ping statistics --- 00:14:29.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.618 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:29.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:14:29.618 00:14:29.618 --- 10.0.0.1 ping statistics --- 00:14:29.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.618 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:29.618 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:29.877 00:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:29.877 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:29.877 00:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:29.877 00:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.877 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2633821 00:14:29.877 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:29.877 00:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2633821 00:14:29.877 00:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2633821 ']' 00:14:29.877 00:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.877 00:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.877 00:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
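For reference, the network plumbing that nvmf_tcp_init performs in the log above reduces to the commands below. This is only a condensed sketch: the interface names (cvl_0_0/cvl_0_1), the namespace name and the 10.0.0.0/24 addresses are taken from the log, and all error handling and cleanup are omitted.

  # move one E810 port into a private namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                   # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> root ns

With connectivity confirmed, nvmf_tgt is started inside the namespace (via NVMF_TARGET_NS_CMD) and, a little further down, a separate spdk_tgt host application is started in the root namespace on /var/tmp/host.sock.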
00:14:29.877 00:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.877 00:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2633970 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fa2fe16624d527bf47d5e012eb9df3b80b863559e5b01de7 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.bvC 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fa2fe16624d527bf47d5e012eb9df3b80b863559e5b01de7 0 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fa2fe16624d527bf47d5e012eb9df3b80b863559e5b01de7 0 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fa2fe16624d527bf47d5e012eb9df3b80b863559e5b01de7 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.bvC 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.bvC 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.bvC 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=19cc91125f76871bd5213b72e4d7a9ddf17326e1983ce48543cbd587ef4a1fb8 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Oak 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 19cc91125f76871bd5213b72e4d7a9ddf17326e1983ce48543cbd587ef4a1fb8 3 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 19cc91125f76871bd5213b72e4d7a9ddf17326e1983ce48543cbd587ef4a1fb8 3 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=19cc91125f76871bd5213b72e4d7a9ddf17326e1983ce48543cbd587ef4a1fb8 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Oak 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Oak 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Oak 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4235cfdcd80f8c91355a057862240959 00:14:30.812 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:30.813 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ba3 00:14:30.813 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4235cfdcd80f8c91355a057862240959 1 00:14:30.813 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4235cfdcd80f8c91355a057862240959 1 00:14:30.813 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:30.813 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:30.813 00:51:05 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=4235cfdcd80f8c91355a057862240959 00:14:30.813 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:30.813 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:31.071 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ba3 00:14:31.071 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ba3 00:14:31.071 00:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.ba3 00:14:31.071 00:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:31.071 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:31.071 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:31.071 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:31.071 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:31.071 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:31.071 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:31.071 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3d19f3085da1473b4b162f3133fda59422979a89a078b67a 00:14:31.071 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:31.071 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.7Qu 00:14:31.071 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3d19f3085da1473b4b162f3133fda59422979a89a078b67a 2 00:14:31.071 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3d19f3085da1473b4b162f3133fda59422979a89a078b67a 2 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3d19f3085da1473b4b162f3133fda59422979a89a078b67a 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.7Qu 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.7Qu 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.7Qu 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3008ec923efb6fca6eb5d36897d8f5473046a546576bbd29 00:14:31.072 
00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.1V3 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3008ec923efb6fca6eb5d36897d8f5473046a546576bbd29 2 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3008ec923efb6fca6eb5d36897d8f5473046a546576bbd29 2 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3008ec923efb6fca6eb5d36897d8f5473046a546576bbd29 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.1V3 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.1V3 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.1V3 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0c819afde84556908a91a184b8779841 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.rLY 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0c819afde84556908a91a184b8779841 1 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0c819afde84556908a91a184b8779841 1 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0c819afde84556908a91a184b8779841 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.rLY 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.rLY 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.rLY 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a071a33bd568db6c8e50a2847512372d9662383e0edb9cf3a0c042d325eb61df 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1Ik 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a071a33bd568db6c8e50a2847512372d9662383e0edb9cf3a0c042d325eb61df 3 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a071a33bd568db6c8e50a2847512372d9662383e0edb9cf3a0c042d325eb61df 3 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a071a33bd568db6c8e50a2847512372d9662383e0edb9cf3a0c042d325eb61df 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:31.072 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:31.331 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1Ik 00:14:31.331 00:51:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1Ik 00:14:31.331 00:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.1Ik 00:14:31.331 00:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:31.331 00:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2633821 00:14:31.331 00:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2633821 ']' 00:14:31.331 00:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.331 00:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.331 00:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
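Each gen_dhchap_key call in the block above draws len/2 random bytes with xxd, hex-encodes them, and wraps the hex string into an NVMe DHCHAP secret of the form DHHC-1:<digest id>:<base64 blob>: before writing it to a mode-0600 temp file. A rough stand-alone equivalent for the 48-character null-digest key is sketched below; the CRC-32 trailer and its byte order are inferred from the DHHC-1:00:...==: secrets that appear later in the log, not copied from the harness, so treat that part as an assumption.

  # sketch: produce a DHCHAP secret roughly equivalent to gen_dhchap_key null 48
  key=$(xxd -p -c0 -l 24 /dev/urandom)    # 24 random bytes -> 48 hex characters
  # assumption: blob = base64(hex_key || crc32(hex_key)), CRC byte order assumed little-endian
  secret=$(python3 -c 'import sys, base64, zlib; k = sys.argv[1].encode(); crc = zlib.crc32(k).to_bytes(4, "little"); print("DHHC-1:00:" + base64.b64encode(k + crc).decode() + ":")' "$key")
  keyfile=$(mktemp -t spdk.key-null.XXX)
  echo "$secret" > "$keyfile" && chmod 0600 "$keyfile"

The same pattern with lengths 32/48/64 and digest ids 1-3 yields the sha256/sha384/sha512 variants used for keys[1..3] and the controller keys above.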
00:14:31.331 00:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.331 00:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.589 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.589 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:31.589 00:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2633970 /var/tmp/host.sock 00:14:31.589 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2633970 ']' 00:14:31.589 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:31.589 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.589 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:31.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:31.589 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.589 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.847 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.847 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:31.847 00:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:31.847 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.847 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.847 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.847 00:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:31.847 00:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.bvC 00:14:31.847 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.847 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.847 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.847 00:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.bvC 00:14:31.847 00:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.bvC 00:14:32.105 00:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Oak ]] 00:14:32.105 00:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Oak 00:14:32.105 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.105 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.105 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.105 00:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Oak 00:14:32.105 00:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Oak 00:14:32.363 00:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:32.363 00:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ba3 00:14:32.363 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.363 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.363 00:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.363 00:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.ba3 00:14:32.363 00:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.ba3 00:14:32.621 00:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.7Qu ]] 00:14:32.621 00:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7Qu 00:14:32.621 00:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.621 00:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.621 00:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.621 00:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7Qu 00:14:32.621 00:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7Qu 00:14:32.880 00:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:32.880 00:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.1V3 00:14:32.880 00:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.880 00:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.880 00:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.880 00:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.1V3 00:14:32.880 00:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.1V3 00:14:33.138 00:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.rLY ]] 00:14:33.138 00:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rLY 00:14:33.138 00:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.138 00:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.138 00:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.138 00:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rLY 00:14:33.138 00:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.rLY 00:14:33.395 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:33.395 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.1Ik 00:14:33.395 00:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.395 00:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.395 00:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.395 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.1Ik 00:14:33.395 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.1Ik 00:14:33.653 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:33.653 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:33.653 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:33.653 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.653 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:33.653 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:33.911 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:33.911 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.911 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:33.911 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:33.911 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:33.911 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.911 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.911 00:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.911 00:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.911 00:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.911 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.911 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.169 00:14:34.169 00:51:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.169 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.169 00:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.426 00:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.426 00:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.426 00:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.426 00:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.426 00:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.426 00:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.426 { 00:14:34.426 "cntlid": 1, 00:14:34.426 "qid": 0, 00:14:34.426 "state": "enabled", 00:14:34.426 "thread": "nvmf_tgt_poll_group_000", 00:14:34.426 "listen_address": { 00:14:34.426 "trtype": "TCP", 00:14:34.426 "adrfam": "IPv4", 00:14:34.426 "traddr": "10.0.0.2", 00:14:34.426 "trsvcid": "4420" 00:14:34.426 }, 00:14:34.426 "peer_address": { 00:14:34.426 "trtype": "TCP", 00:14:34.426 "adrfam": "IPv4", 00:14:34.426 "traddr": "10.0.0.1", 00:14:34.426 "trsvcid": "46776" 00:14:34.426 }, 00:14:34.426 "auth": { 00:14:34.426 "state": "completed", 00:14:34.426 "digest": "sha256", 00:14:34.426 "dhgroup": "null" 00:14:34.426 } 00:14:34.426 } 00:14:34.426 ]' 00:14:34.426 00:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.684 00:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.684 00:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:34.684 00:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:34.684 00:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.684 00:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.684 00:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.684 00:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.942 00:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmEyZmUxNjYyNGQ1MjdiZjQ3ZDVlMDEyZWI5ZGYzYjgwYjg2MzU1OWU1YjAxZGU3X1GPxA==: --dhchap-ctrl-secret DHHC-1:03:MTljYzkxMTI1Zjc2ODcxYmQ1MjEzYjcyZTRkN2E5ZGRmMTczMjZlMTk4M2NlNDg1NDNjYmQ1ODdlZjRhMWZiONH/NNE=: 00:14:35.874 00:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.874 00:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:35.874 00:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.874 00:51:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.874 00:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.874 00:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:35.874 00:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:35.874 00:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:36.133 00:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:36.133 00:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.133 00:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:36.133 00:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:36.133 00:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:36.133 00:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.133 00:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.133 00:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.133 00:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.133 00:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.133 00:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.133 00:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.431 00:14:36.431 00:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:36.431 00:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:36.431 00:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.689 00:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.689 00:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.689 00:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.689 00:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.689 00:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.689 00:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:36.689 { 00:14:36.689 "cntlid": 3, 00:14:36.689 "qid": 0, 00:14:36.689 
"state": "enabled", 00:14:36.689 "thread": "nvmf_tgt_poll_group_000", 00:14:36.689 "listen_address": { 00:14:36.689 "trtype": "TCP", 00:14:36.689 "adrfam": "IPv4", 00:14:36.689 "traddr": "10.0.0.2", 00:14:36.689 "trsvcid": "4420" 00:14:36.689 }, 00:14:36.689 "peer_address": { 00:14:36.689 "trtype": "TCP", 00:14:36.689 "adrfam": "IPv4", 00:14:36.689 "traddr": "10.0.0.1", 00:14:36.689 "trsvcid": "46800" 00:14:36.689 }, 00:14:36.689 "auth": { 00:14:36.689 "state": "completed", 00:14:36.689 "digest": "sha256", 00:14:36.689 "dhgroup": "null" 00:14:36.689 } 00:14:36.689 } 00:14:36.689 ]' 00:14:36.689 00:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:36.689 00:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:36.689 00:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:36.689 00:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:36.689 00:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:36.689 00:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.689 00:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.689 00:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.947 00:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDIzNWNmZGNkODBmOGM5MTM1NWEwNTc4NjIyNDA5NTmxCvf8: --dhchap-ctrl-secret DHHC-1:02:M2QxOWYzMDg1ZGExNDczYjRiMTYyZjMxMzNmZGE1OTQyMjk3OWE4OWEwNzhiNjdhGmrOGg==: 00:14:38.325 00:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.325 00:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.326 00:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.326 00:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.326 00:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.326 00:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.326 00:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:38.326 00:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:38.326 00:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:38.326 00:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.326 00:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:38.326 00:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:38.326 00:51:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:38.326 00:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.326 00:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.326 00:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.326 00:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.326 00:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.326 00:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.326 00:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.584 00:14:38.584 00:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.584 00:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.584 00:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.841 00:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.841 00:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.841 00:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.841 00:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.098 00:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.098 00:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.098 { 00:14:39.098 "cntlid": 5, 00:14:39.098 "qid": 0, 00:14:39.098 "state": "enabled", 00:14:39.098 "thread": "nvmf_tgt_poll_group_000", 00:14:39.098 "listen_address": { 00:14:39.098 "trtype": "TCP", 00:14:39.098 "adrfam": "IPv4", 00:14:39.098 "traddr": "10.0.0.2", 00:14:39.098 "trsvcid": "4420" 00:14:39.098 }, 00:14:39.098 "peer_address": { 00:14:39.098 "trtype": "TCP", 00:14:39.098 "adrfam": "IPv4", 00:14:39.098 "traddr": "10.0.0.1", 00:14:39.098 "trsvcid": "42062" 00:14:39.098 }, 00:14:39.098 "auth": { 00:14:39.098 "state": "completed", 00:14:39.098 "digest": "sha256", 00:14:39.098 "dhgroup": "null" 00:14:39.098 } 00:14:39.098 } 00:14:39.098 ]' 00:14:39.098 00:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.098 00:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.098 00:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.098 00:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:39.098 00:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:14:39.098 00:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.098 00:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.098 00:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.356 00:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzAwOGVjOTIzZWZiNmZjYTZlYjVkMzY4OTdkOGY1NDczMDQ2YTU0NjU3NmJiZDI5woUgqQ==: --dhchap-ctrl-secret DHHC-1:01:MGM4MTlhZmRlODQ1NTY5MDhhOTFhMTg0Yjg3Nzk4NDHrycQE: 00:14:40.288 00:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.288 00:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:40.288 00:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.288 00:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.288 00:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.288 00:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:40.288 00:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:40.288 00:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:40.546 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:40.546 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.546 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:40.546 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:40.546 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:40.546 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.546 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:40.546 00:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.546 00:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.546 00:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.546 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:40.546 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:40.804 00:14:41.062 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.062 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.062 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.062 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.062 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.062 00:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.062 00:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.319 00:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.319 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.319 { 00:14:41.319 "cntlid": 7, 00:14:41.319 "qid": 0, 00:14:41.319 "state": "enabled", 00:14:41.319 "thread": "nvmf_tgt_poll_group_000", 00:14:41.319 "listen_address": { 00:14:41.319 "trtype": "TCP", 00:14:41.319 "adrfam": "IPv4", 00:14:41.319 "traddr": "10.0.0.2", 00:14:41.319 "trsvcid": "4420" 00:14:41.319 }, 00:14:41.319 "peer_address": { 00:14:41.319 "trtype": "TCP", 00:14:41.319 "adrfam": "IPv4", 00:14:41.319 "traddr": "10.0.0.1", 00:14:41.319 "trsvcid": "42092" 00:14:41.319 }, 00:14:41.319 "auth": { 00:14:41.320 "state": "completed", 00:14:41.320 "digest": "sha256", 00:14:41.320 "dhgroup": "null" 00:14:41.320 } 00:14:41.320 } 00:14:41.320 ]' 00:14:41.320 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.320 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.320 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.320 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:41.320 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.320 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.320 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.320 00:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.577 00:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTA3MWEzM2JkNTY4ZGI2YzhlNTBhMjg0NzUxMjM3MmQ5NjYyMzgzZTBlZGI5Y2YzYTBjMDQyZDMyNWViNjFkZlZNQEs=: 00:14:42.510 00:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.510 00:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:42.510 00:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.510 00:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.510 00:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.510 00:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:42.510 00:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.510 00:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:42.510 00:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:42.768 00:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:42.768 00:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:42.768 00:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:42.768 00:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:42.768 00:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:42.768 00:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.768 00:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.768 00:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.768 00:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.768 00:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.768 00:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.768 00:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.334 00:14:43.334 00:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:43.334 00:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:43.334 00:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.334 00:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.334 00:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.334 00:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:43.334 00:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.334 00:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.334 00:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:43.334 { 00:14:43.334 "cntlid": 9, 00:14:43.334 "qid": 0, 00:14:43.334 "state": "enabled", 00:14:43.334 "thread": "nvmf_tgt_poll_group_000", 00:14:43.334 "listen_address": { 00:14:43.334 "trtype": "TCP", 00:14:43.334 "adrfam": "IPv4", 00:14:43.334 "traddr": "10.0.0.2", 00:14:43.334 "trsvcid": "4420" 00:14:43.334 }, 00:14:43.334 "peer_address": { 00:14:43.334 "trtype": "TCP", 00:14:43.334 "adrfam": "IPv4", 00:14:43.334 "traddr": "10.0.0.1", 00:14:43.334 "trsvcid": "42116" 00:14:43.334 }, 00:14:43.334 "auth": { 00:14:43.334 "state": "completed", 00:14:43.334 "digest": "sha256", 00:14:43.334 "dhgroup": "ffdhe2048" 00:14:43.334 } 00:14:43.334 } 00:14:43.334 ]' 00:14:43.334 00:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:43.592 00:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.592 00:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:43.592 00:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:43.592 00:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:43.592 00:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.592 00:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.592 00:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.851 00:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmEyZmUxNjYyNGQ1MjdiZjQ3ZDVlMDEyZWI5ZGYzYjgwYjg2MzU1OWU1YjAxZGU3X1GPxA==: --dhchap-ctrl-secret DHHC-1:03:MTljYzkxMTI1Zjc2ODcxYmQ1MjEzYjcyZTRkN2E5ZGRmMTczMjZlMTk4M2NlNDg1NDNjYmQ1ODdlZjRhMWZiONH/NNE=: 00:14:44.784 00:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.784 00:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.784 00:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.784 00:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.784 00:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.784 00:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:44.784 00:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:44.784 00:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:45.042 00:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:45.042 00:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.042 00:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:45.043 00:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:45.043 00:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:45.043 00:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.043 00:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.043 00:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.043 00:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.043 00:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.043 00:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.043 00:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.301 00:14:45.301 00:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.301 00:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.301 00:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.562 00:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.562 00:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.562 00:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.562 00:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.562 00:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.562 00:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.562 { 00:14:45.562 "cntlid": 11, 00:14:45.562 "qid": 0, 00:14:45.562 "state": "enabled", 00:14:45.562 "thread": "nvmf_tgt_poll_group_000", 00:14:45.562 "listen_address": { 00:14:45.562 "trtype": "TCP", 00:14:45.562 "adrfam": "IPv4", 00:14:45.562 "traddr": "10.0.0.2", 00:14:45.562 "trsvcid": "4420" 00:14:45.562 }, 00:14:45.562 "peer_address": { 00:14:45.562 "trtype": "TCP", 00:14:45.562 "adrfam": "IPv4", 00:14:45.562 "traddr": "10.0.0.1", 00:14:45.562 "trsvcid": "42134" 00:14:45.562 }, 00:14:45.562 "auth": { 00:14:45.562 "state": "completed", 00:14:45.562 "digest": "sha256", 00:14:45.562 "dhgroup": "ffdhe2048" 00:14:45.562 } 00:14:45.562 } 00:14:45.562 ]' 00:14:45.562 
00:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.562 00:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.562 00:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.822 00:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:45.822 00:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.822 00:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.822 00:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.822 00:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.080 00:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDIzNWNmZGNkODBmOGM5MTM1NWEwNTc4NjIyNDA5NTmxCvf8: --dhchap-ctrl-secret DHHC-1:02:M2QxOWYzMDg1ZGExNDczYjRiMTYyZjMxMzNmZGE1OTQyMjk3OWE4OWEwNzhiNjdhGmrOGg==: 00:14:47.014 00:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.014 00:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:47.014 00:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.014 00:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.014 00:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.014 00:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:47.014 00:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:47.014 00:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:47.272 00:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:47.272 00:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.272 00:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:47.272 00:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:47.272 00:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:47.272 00:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.272 00:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.272 00:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.272 00:51:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:47.272 00:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.272 00:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.272 00:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.530 00:14:47.530 00:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.530 00:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.530 00:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.788 00:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.788 00:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.788 00:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.788 00:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.788 00:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.788 00:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:47.788 { 00:14:47.788 "cntlid": 13, 00:14:47.788 "qid": 0, 00:14:47.788 "state": "enabled", 00:14:47.788 "thread": "nvmf_tgt_poll_group_000", 00:14:47.788 "listen_address": { 00:14:47.788 "trtype": "TCP", 00:14:47.788 "adrfam": "IPv4", 00:14:47.788 "traddr": "10.0.0.2", 00:14:47.788 "trsvcid": "4420" 00:14:47.788 }, 00:14:47.788 "peer_address": { 00:14:47.788 "trtype": "TCP", 00:14:47.788 "adrfam": "IPv4", 00:14:47.788 "traddr": "10.0.0.1", 00:14:47.788 "trsvcid": "42510" 00:14:47.788 }, 00:14:47.788 "auth": { 00:14:47.788 "state": "completed", 00:14:47.788 "digest": "sha256", 00:14:47.788 "dhgroup": "ffdhe2048" 00:14:47.788 } 00:14:47.788 } 00:14:47.788 ]' 00:14:47.788 00:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:47.788 00:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.788 00:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.788 00:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:47.788 00:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.045 00:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.045 00:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.045 00:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.303 00:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzAwOGVjOTIzZWZiNmZjYTZlYjVkMzY4OTdkOGY1NDczMDQ2YTU0NjU3NmJiZDI5woUgqQ==: --dhchap-ctrl-secret DHHC-1:01:MGM4MTlhZmRlODQ1NTY5MDhhOTFhMTg0Yjg3Nzk4NDHrycQE: 00:14:49.236 00:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.236 00:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.236 00:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.236 00:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.236 00:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.236 00:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.236 00:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:49.236 00:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:49.494 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:49.494 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.494 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:49.494 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:49.494 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:49.494 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.494 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:49.494 00:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.494 00:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.494 00:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.494 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:49.494 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:49.752 00:14:49.752 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:49.752 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:49.752 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.010 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.010 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.010 00:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.010 00:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.010 00:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.010 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.010 { 00:14:50.010 "cntlid": 15, 00:14:50.010 "qid": 0, 00:14:50.010 "state": "enabled", 00:14:50.010 "thread": "nvmf_tgt_poll_group_000", 00:14:50.010 "listen_address": { 00:14:50.010 "trtype": "TCP", 00:14:50.010 "adrfam": "IPv4", 00:14:50.010 "traddr": "10.0.0.2", 00:14:50.010 "trsvcid": "4420" 00:14:50.010 }, 00:14:50.010 "peer_address": { 00:14:50.010 "trtype": "TCP", 00:14:50.010 "adrfam": "IPv4", 00:14:50.010 "traddr": "10.0.0.1", 00:14:50.010 "trsvcid": "42536" 00:14:50.010 }, 00:14:50.010 "auth": { 00:14:50.010 "state": "completed", 00:14:50.010 "digest": "sha256", 00:14:50.010 "dhgroup": "ffdhe2048" 00:14:50.010 } 00:14:50.010 } 00:14:50.010 ]' 00:14:50.010 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.010 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.010 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.010 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:50.010 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.010 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.010 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.010 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.285 00:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTA3MWEzM2JkNTY4ZGI2YzhlNTBhMjg0NzUxMjM3MmQ5NjYyMzgzZTBlZGI5Y2YzYTBjMDQyZDMyNWViNjFkZlZNQEs=: 00:14:51.231 00:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.231 00:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:51.231 00:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.231 00:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.231 00:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.231 00:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:51.231 00:51:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.231 00:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:51.231 00:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:51.488 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:51.488 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.488 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:51.488 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:51.488 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:51.488 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.488 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.488 00:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.488 00:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.488 00:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.488 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.488 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.053 00:14:52.053 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.053 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.053 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.310 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.310 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.310 00:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.311 00:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.311 00:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.311 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.311 { 00:14:52.311 "cntlid": 17, 00:14:52.311 "qid": 0, 00:14:52.311 "state": "enabled", 00:14:52.311 "thread": "nvmf_tgt_poll_group_000", 00:14:52.311 "listen_address": { 00:14:52.311 "trtype": "TCP", 00:14:52.311 "adrfam": "IPv4", 
00:14:52.311 "traddr": "10.0.0.2", 00:14:52.311 "trsvcid": "4420" 00:14:52.311 }, 00:14:52.311 "peer_address": { 00:14:52.311 "trtype": "TCP", 00:14:52.311 "adrfam": "IPv4", 00:14:52.311 "traddr": "10.0.0.1", 00:14:52.311 "trsvcid": "42558" 00:14:52.311 }, 00:14:52.311 "auth": { 00:14:52.311 "state": "completed", 00:14:52.311 "digest": "sha256", 00:14:52.311 "dhgroup": "ffdhe3072" 00:14:52.311 } 00:14:52.311 } 00:14:52.311 ]' 00:14:52.311 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.311 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.311 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:52.311 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:52.311 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:52.311 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.311 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.311 00:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.568 00:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmEyZmUxNjYyNGQ1MjdiZjQ3ZDVlMDEyZWI5ZGYzYjgwYjg2MzU1OWU1YjAxZGU3X1GPxA==: --dhchap-ctrl-secret DHHC-1:03:MTljYzkxMTI1Zjc2ODcxYmQ1MjEzYjcyZTRkN2E5ZGRmMTczMjZlMTk4M2NlNDg1NDNjYmQ1ODdlZjRhMWZiONH/NNE=: 00:14:53.500 00:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.500 00:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:53.500 00:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.500 00:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.500 00:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.500 00:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:53.500 00:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:53.500 00:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:53.757 00:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:53.757 00:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:53.757 00:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:53.757 00:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:53.757 00:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:53.757 00:51:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.757 00:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.757 00:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.757 00:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.757 00:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.757 00:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.757 00:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.322 00:14:54.322 00:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:54.322 00:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.322 00:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.322 00:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.322 00:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.322 00:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.322 00:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.322 00:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.322 00:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.322 { 00:14:54.322 "cntlid": 19, 00:14:54.322 "qid": 0, 00:14:54.322 "state": "enabled", 00:14:54.322 "thread": "nvmf_tgt_poll_group_000", 00:14:54.322 "listen_address": { 00:14:54.322 "trtype": "TCP", 00:14:54.322 "adrfam": "IPv4", 00:14:54.322 "traddr": "10.0.0.2", 00:14:54.322 "trsvcid": "4420" 00:14:54.322 }, 00:14:54.322 "peer_address": { 00:14:54.322 "trtype": "TCP", 00:14:54.322 "adrfam": "IPv4", 00:14:54.322 "traddr": "10.0.0.1", 00:14:54.322 "trsvcid": "42588" 00:14:54.322 }, 00:14:54.322 "auth": { 00:14:54.322 "state": "completed", 00:14:54.322 "digest": "sha256", 00:14:54.322 "dhgroup": "ffdhe3072" 00:14:54.322 } 00:14:54.322 } 00:14:54.322 ]' 00:14:54.322 00:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.580 00:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.580 00:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.580 00:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:54.580 00:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.580 00:51:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.580 00:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.580 00:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.838 00:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDIzNWNmZGNkODBmOGM5MTM1NWEwNTc4NjIyNDA5NTmxCvf8: --dhchap-ctrl-secret DHHC-1:02:M2QxOWYzMDg1ZGExNDczYjRiMTYyZjMxMzNmZGE1OTQyMjk3OWE4OWEwNzhiNjdhGmrOGg==: 00:14:55.770 00:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.770 00:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:55.770 00:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.770 00:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.770 00:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.770 00:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:55.770 00:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:55.770 00:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:56.028 00:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:56.028 00:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:56.028 00:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:56.028 00:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:56.028 00:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:56.028 00:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.028 00:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.028 00:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.028 00:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.028 00:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.028 00:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.028 00:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.285 00:14:56.285 00:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.285 00:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.285 00:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.543 00:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.543 00:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.543 00:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.543 00:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.543 00:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.543 00:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.543 { 00:14:56.543 "cntlid": 21, 00:14:56.543 "qid": 0, 00:14:56.543 "state": "enabled", 00:14:56.543 "thread": "nvmf_tgt_poll_group_000", 00:14:56.543 "listen_address": { 00:14:56.543 "trtype": "TCP", 00:14:56.543 "adrfam": "IPv4", 00:14:56.543 "traddr": "10.0.0.2", 00:14:56.543 "trsvcid": "4420" 00:14:56.543 }, 00:14:56.543 "peer_address": { 00:14:56.543 "trtype": "TCP", 00:14:56.543 "adrfam": "IPv4", 00:14:56.543 "traddr": "10.0.0.1", 00:14:56.543 "trsvcid": "42596" 00:14:56.543 }, 00:14:56.543 "auth": { 00:14:56.543 "state": "completed", 00:14:56.543 "digest": "sha256", 00:14:56.543 "dhgroup": "ffdhe3072" 00:14:56.543 } 00:14:56.543 } 00:14:56.543 ]' 00:14:56.543 00:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.543 00:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:56.543 00:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.543 00:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:56.543 00:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.800 00:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.800 00:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.800 00:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.058 00:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzAwOGVjOTIzZWZiNmZjYTZlYjVkMzY4OTdkOGY1NDczMDQ2YTU0NjU3NmJiZDI5woUgqQ==: --dhchap-ctrl-secret DHHC-1:01:MGM4MTlhZmRlODQ1NTY5MDhhOTFhMTg0Yjg3Nzk4NDHrycQE: 00:14:57.989 00:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
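Every iteration in this trace runs the same per-key DH-HMAC-CHAP cycle, only varying the digest/dhgroup pair and the key index. A condensed sketch of one pass (the ffdhe3072/key2 iteration that just completed), reconstructed from the commands visible in the trace; the host NQN, host UUID, and DHHC-1 secrets are abbreviated as <placeholders>, and the surrounding loop structure is approximated from the trace rather than quoted from target/auth.sh:
# Host side (rpc.py against /var/tmp/host.sock): restrict the initiator to one digest and one dhgroup.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
# Target side (rpc_cmd in the trace): register the host with a DH-HMAC-CHAP key and controller key.
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Host side: attach a controller, forcing the bidirectional authentication handshake.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Target side: confirm the admin qpair negotiated the expected parameters, then tear down.
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0   # expect .[0].auth: state=completed, digest=sha256, dhgroup=ffdhe3072
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# Repeat the handshake through the kernel initiator using the plaintext DHHC-1 secrets, then clean up.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q <host-nqn> --hostid <host-uuid> --dhchap-secret DHHC-1:02:<key2-secret> --dhchap-ctrl-secret DHHC-1:01:<ckey2-secret>
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <host-nqn>
The trace that follows continues this pattern for key3 under ffdhe3072 and then repeats all four keys under ffdhe4096.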
00:14:57.989 00:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:57.989 00:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.989 00:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.989 00:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.989 00:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.989 00:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:57.989 00:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:58.247 00:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:58.247 00:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:58.247 00:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:58.247 00:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:58.247 00:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:58.247 00:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.247 00:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:58.247 00:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.247 00:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.247 00:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.247 00:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:58.247 00:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:58.505 00:14:58.505 00:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.505 00:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.505 00:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.761 00:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.761 00:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.762 00:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.762 00:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:14:58.762 00:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.762 00:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.762 { 00:14:58.762 "cntlid": 23, 00:14:58.762 "qid": 0, 00:14:58.762 "state": "enabled", 00:14:58.762 "thread": "nvmf_tgt_poll_group_000", 00:14:58.762 "listen_address": { 00:14:58.762 "trtype": "TCP", 00:14:58.762 "adrfam": "IPv4", 00:14:58.762 "traddr": "10.0.0.2", 00:14:58.762 "trsvcid": "4420" 00:14:58.762 }, 00:14:58.762 "peer_address": { 00:14:58.762 "trtype": "TCP", 00:14:58.762 "adrfam": "IPv4", 00:14:58.762 "traddr": "10.0.0.1", 00:14:58.762 "trsvcid": "40514" 00:14:58.762 }, 00:14:58.762 "auth": { 00:14:58.762 "state": "completed", 00:14:58.762 "digest": "sha256", 00:14:58.762 "dhgroup": "ffdhe3072" 00:14:58.762 } 00:14:58.762 } 00:14:58.762 ]' 00:14:58.762 00:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.762 00:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.762 00:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.762 00:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:58.762 00:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.762 00:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.762 00:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.762 00:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.019 00:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTA3MWEzM2JkNTY4ZGI2YzhlNTBhMjg0NzUxMjM3MmQ5NjYyMzgzZTBlZGI5Y2YzYTBjMDQyZDMyNWViNjFkZlZNQEs=: 00:15:00.391 00:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.391 00:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.391 00:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.392 00:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.392 00:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.392 00:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:00.392 00:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.392 00:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:00.392 00:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:00.392 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:15:00.392 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.392 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:00.392 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:00.392 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:00.392 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.392 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.392 00:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.392 00:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.392 00:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.392 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.392 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.957 00:15:00.957 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:00.957 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.957 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.214 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.214 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.214 00:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.214 00:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.214 00:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.214 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.214 { 00:15:01.214 "cntlid": 25, 00:15:01.214 "qid": 0, 00:15:01.214 "state": "enabled", 00:15:01.214 "thread": "nvmf_tgt_poll_group_000", 00:15:01.214 "listen_address": { 00:15:01.214 "trtype": "TCP", 00:15:01.214 "adrfam": "IPv4", 00:15:01.214 "traddr": "10.0.0.2", 00:15:01.214 "trsvcid": "4420" 00:15:01.214 }, 00:15:01.214 "peer_address": { 00:15:01.214 "trtype": "TCP", 00:15:01.214 "adrfam": "IPv4", 00:15:01.214 "traddr": "10.0.0.1", 00:15:01.214 "trsvcid": "40554" 00:15:01.214 }, 00:15:01.214 "auth": { 00:15:01.214 "state": "completed", 00:15:01.214 "digest": "sha256", 00:15:01.214 "dhgroup": "ffdhe4096" 00:15:01.214 } 00:15:01.214 } 00:15:01.214 ]' 00:15:01.214 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.214 00:51:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:01.215 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.215 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:01.215 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.215 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.215 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.215 00:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.472 00:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmEyZmUxNjYyNGQ1MjdiZjQ3ZDVlMDEyZWI5ZGYzYjgwYjg2MzU1OWU1YjAxZGU3X1GPxA==: --dhchap-ctrl-secret DHHC-1:03:MTljYzkxMTI1Zjc2ODcxYmQ1MjEzYjcyZTRkN2E5ZGRmMTczMjZlMTk4M2NlNDg1NDNjYmQ1ODdlZjRhMWZiONH/NNE=: 00:15:02.405 00:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.405 00:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:02.405 00:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.405 00:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.405 00:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.405 00:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:02.405 00:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:02.405 00:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:02.663 00:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:15:02.663 00:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.663 00:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:02.663 00:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:02.663 00:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:02.663 00:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.663 00:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.663 00:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.663 00:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.663 00:51:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.663 00:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.663 00:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.227 00:15:03.227 00:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:03.227 00:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.227 00:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:03.485 00:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.485 00:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.485 00:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.485 00:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.485 00:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.485 00:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.485 { 00:15:03.485 "cntlid": 27, 00:15:03.485 "qid": 0, 00:15:03.485 "state": "enabled", 00:15:03.485 "thread": "nvmf_tgt_poll_group_000", 00:15:03.485 "listen_address": { 00:15:03.485 "trtype": "TCP", 00:15:03.485 "adrfam": "IPv4", 00:15:03.485 "traddr": "10.0.0.2", 00:15:03.485 "trsvcid": "4420" 00:15:03.485 }, 00:15:03.485 "peer_address": { 00:15:03.485 "trtype": "TCP", 00:15:03.485 "adrfam": "IPv4", 00:15:03.485 "traddr": "10.0.0.1", 00:15:03.485 "trsvcid": "40572" 00:15:03.485 }, 00:15:03.485 "auth": { 00:15:03.485 "state": "completed", 00:15:03.485 "digest": "sha256", 00:15:03.485 "dhgroup": "ffdhe4096" 00:15:03.485 } 00:15:03.485 } 00:15:03.485 ]' 00:15:03.485 00:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:03.485 00:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.485 00:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:03.485 00:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:03.485 00:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.485 00:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.485 00:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.485 00:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.743 00:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDIzNWNmZGNkODBmOGM5MTM1NWEwNTc4NjIyNDA5NTmxCvf8: --dhchap-ctrl-secret DHHC-1:02:M2QxOWYzMDg1ZGExNDczYjRiMTYyZjMxMzNmZGE1OTQyMjk3OWE4OWEwNzhiNjdhGmrOGg==: 00:15:04.704 00:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.704 00:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.704 00:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.704 00:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.704 00:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.704 00:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.704 00:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:04.704 00:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:05.269 00:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:15:05.269 00:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:05.269 00:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:05.269 00:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:05.269 00:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:05.269 00:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.269 00:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.269 00:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.269 00:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.269 00:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.269 00:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.269 00:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.527 00:15:05.527 00:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.527 00:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.527 00:51:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.784 00:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.784 00:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.784 00:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.784 00:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.784 00:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.784 00:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.784 { 00:15:05.784 "cntlid": 29, 00:15:05.784 "qid": 0, 00:15:05.784 "state": "enabled", 00:15:05.784 "thread": "nvmf_tgt_poll_group_000", 00:15:05.784 "listen_address": { 00:15:05.784 "trtype": "TCP", 00:15:05.784 "adrfam": "IPv4", 00:15:05.784 "traddr": "10.0.0.2", 00:15:05.784 "trsvcid": "4420" 00:15:05.784 }, 00:15:05.784 "peer_address": { 00:15:05.784 "trtype": "TCP", 00:15:05.784 "adrfam": "IPv4", 00:15:05.784 "traddr": "10.0.0.1", 00:15:05.784 "trsvcid": "40606" 00:15:05.784 }, 00:15:05.784 "auth": { 00:15:05.784 "state": "completed", 00:15:05.784 "digest": "sha256", 00:15:05.784 "dhgroup": "ffdhe4096" 00:15:05.784 } 00:15:05.784 } 00:15:05.784 ]' 00:15:05.784 00:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.784 00:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.784 00:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.784 00:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:05.784 00:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.784 00:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.784 00:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.784 00:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.349 00:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzAwOGVjOTIzZWZiNmZjYTZlYjVkMzY4OTdkOGY1NDczMDQ2YTU0NjU3NmJiZDI5woUgqQ==: --dhchap-ctrl-secret DHHC-1:01:MGM4MTlhZmRlODQ1NTY5MDhhOTFhMTg0Yjg3Nzk4NDHrycQE: 00:15:07.282 00:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.282 00:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.282 00:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.282 00:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.282 00:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.282 00:51:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.282 00:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:07.282 00:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:07.539 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:15:07.539 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:07.539 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:07.539 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:07.539 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:07.539 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.539 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:07.539 00:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.539 00:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.539 00:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.539 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:07.539 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:07.796 00:15:07.796 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.796 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.796 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.053 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.053 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.053 00:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.053 00:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.053 00:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.053 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:08.053 { 00:15:08.053 "cntlid": 31, 00:15:08.053 "qid": 0, 00:15:08.053 "state": "enabled", 00:15:08.053 "thread": "nvmf_tgt_poll_group_000", 00:15:08.053 "listen_address": { 00:15:08.053 "trtype": "TCP", 00:15:08.053 "adrfam": "IPv4", 00:15:08.053 "traddr": "10.0.0.2", 00:15:08.053 "trsvcid": "4420" 00:15:08.053 }, 
00:15:08.053 "peer_address": { 00:15:08.053 "trtype": "TCP", 00:15:08.053 "adrfam": "IPv4", 00:15:08.053 "traddr": "10.0.0.1", 00:15:08.053 "trsvcid": "37494" 00:15:08.053 }, 00:15:08.053 "auth": { 00:15:08.053 "state": "completed", 00:15:08.053 "digest": "sha256", 00:15:08.053 "dhgroup": "ffdhe4096" 00:15:08.053 } 00:15:08.053 } 00:15:08.053 ]' 00:15:08.053 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:08.053 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.053 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:08.053 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:08.053 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:08.309 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.309 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.309 00:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.567 00:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTA3MWEzM2JkNTY4ZGI2YzhlNTBhMjg0NzUxMjM3MmQ5NjYyMzgzZTBlZGI5Y2YzYTBjMDQyZDMyNWViNjFkZlZNQEs=: 00:15:09.499 00:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.499 00:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:09.499 00:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.499 00:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.499 00:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.499 00:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:09.499 00:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:09.499 00:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:09.499 00:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:09.757 00:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:15:09.757 00:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.757 00:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:09.757 00:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:09.757 00:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:09.757 00:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:15:09.757 00:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.757 00:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.757 00:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.757 00:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.757 00:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.757 00:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.321 00:15:10.321 00:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:10.321 00:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.321 00:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.578 00:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.578 00:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.578 00:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.578 00:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.578 00:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.578 00:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.578 { 00:15:10.578 "cntlid": 33, 00:15:10.578 "qid": 0, 00:15:10.578 "state": "enabled", 00:15:10.578 "thread": "nvmf_tgt_poll_group_000", 00:15:10.578 "listen_address": { 00:15:10.578 "trtype": "TCP", 00:15:10.578 "adrfam": "IPv4", 00:15:10.578 "traddr": "10.0.0.2", 00:15:10.578 "trsvcid": "4420" 00:15:10.578 }, 00:15:10.578 "peer_address": { 00:15:10.578 "trtype": "TCP", 00:15:10.578 "adrfam": "IPv4", 00:15:10.578 "traddr": "10.0.0.1", 00:15:10.578 "trsvcid": "37516" 00:15:10.578 }, 00:15:10.578 "auth": { 00:15:10.578 "state": "completed", 00:15:10.578 "digest": "sha256", 00:15:10.578 "dhgroup": "ffdhe6144" 00:15:10.578 } 00:15:10.578 } 00:15:10.578 ]' 00:15:10.578 00:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.578 00:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.578 00:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.578 00:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:10.578 00:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.578 00:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.578 00:51:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.578 00:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.835 00:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmEyZmUxNjYyNGQ1MjdiZjQ3ZDVlMDEyZWI5ZGYzYjgwYjg2MzU1OWU1YjAxZGU3X1GPxA==: --dhchap-ctrl-secret DHHC-1:03:MTljYzkxMTI1Zjc2ODcxYmQ1MjEzYjcyZTRkN2E5ZGRmMTczMjZlMTk4M2NlNDg1NDNjYmQ1ODdlZjRhMWZiONH/NNE=: 00:15:11.770 00:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.770 00:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:11.770 00:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.770 00:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.770 00:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.770 00:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.770 00:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:11.770 00:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:12.336 00:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:15:12.336 00:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.336 00:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:12.336 00:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:12.336 00:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:12.336 00:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.336 00:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.336 00:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.336 00:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.336 00:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.336 00:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.336 00:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.593 00:15:12.593 00:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.593 00:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.593 00:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.851 00:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.851 00:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.851 00:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.851 00:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.851 00:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.851 00:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.851 { 00:15:12.851 "cntlid": 35, 00:15:12.851 "qid": 0, 00:15:12.851 "state": "enabled", 00:15:12.851 "thread": "nvmf_tgt_poll_group_000", 00:15:12.851 "listen_address": { 00:15:12.851 "trtype": "TCP", 00:15:12.851 "adrfam": "IPv4", 00:15:12.851 "traddr": "10.0.0.2", 00:15:12.851 "trsvcid": "4420" 00:15:12.851 }, 00:15:12.851 "peer_address": { 00:15:12.851 "trtype": "TCP", 00:15:12.851 "adrfam": "IPv4", 00:15:12.851 "traddr": "10.0.0.1", 00:15:12.851 "trsvcid": "37552" 00:15:12.851 }, 00:15:12.851 "auth": { 00:15:12.851 "state": "completed", 00:15:12.851 "digest": "sha256", 00:15:12.851 "dhgroup": "ffdhe6144" 00:15:12.851 } 00:15:12.851 } 00:15:12.851 ]' 00:15:12.851 00:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.109 00:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:13.109 00:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.109 00:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:13.109 00:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.109 00:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.109 00:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.109 00:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.367 00:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDIzNWNmZGNkODBmOGM5MTM1NWEwNTc4NjIyNDA5NTmxCvf8: --dhchap-ctrl-secret DHHC-1:02:M2QxOWYzMDg1ZGExNDczYjRiMTYyZjMxMzNmZGE1OTQyMjk3OWE4OWEwNzhiNjdhGmrOGg==: 00:15:14.300 00:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.300 00:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:14.300 00:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.300 00:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.300 00:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.300 00:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:14.300 00:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:14.300 00:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:14.558 00:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:15:14.558 00:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.558 00:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:14.558 00:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:14.558 00:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:14.558 00:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.558 00:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.558 00:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.558 00:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.558 00:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.558 00:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.558 00:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.123 00:15:15.123 00:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:15.123 00:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:15.123 00:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.381 00:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.381 00:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.381 00:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.381 00:51:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:15.381 00:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.381 00:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:15.381 { 00:15:15.381 "cntlid": 37, 00:15:15.381 "qid": 0, 00:15:15.381 "state": "enabled", 00:15:15.381 "thread": "nvmf_tgt_poll_group_000", 00:15:15.381 "listen_address": { 00:15:15.381 "trtype": "TCP", 00:15:15.381 "adrfam": "IPv4", 00:15:15.381 "traddr": "10.0.0.2", 00:15:15.381 "trsvcid": "4420" 00:15:15.381 }, 00:15:15.381 "peer_address": { 00:15:15.381 "trtype": "TCP", 00:15:15.381 "adrfam": "IPv4", 00:15:15.381 "traddr": "10.0.0.1", 00:15:15.381 "trsvcid": "37582" 00:15:15.381 }, 00:15:15.381 "auth": { 00:15:15.381 "state": "completed", 00:15:15.381 "digest": "sha256", 00:15:15.381 "dhgroup": "ffdhe6144" 00:15:15.381 } 00:15:15.381 } 00:15:15.381 ]' 00:15:15.381 00:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:15.381 00:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.381 00:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:15.639 00:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:15.639 00:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:15.639 00:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.639 00:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.639 00:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.897 00:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzAwOGVjOTIzZWZiNmZjYTZlYjVkMzY4OTdkOGY1NDczMDQ2YTU0NjU3NmJiZDI5woUgqQ==: --dhchap-ctrl-secret DHHC-1:01:MGM4MTlhZmRlODQ1NTY5MDhhOTFhMTg0Yjg3Nzk4NDHrycQE: 00:15:16.829 00:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.829 00:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:16.829 00:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.829 00:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.829 00:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.829 00:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:16.829 00:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:16.829 00:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:17.087 00:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:15:17.087 00:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:17.087 00:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:17.087 00:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:17.087 00:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:17.087 00:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.087 00:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:17.087 00:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.087 00:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.087 00:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.087 00:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:17.087 00:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:17.652 00:15:17.652 00:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:17.652 00:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:17.652 00:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.913 00:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.913 00:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.913 00:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.914 00:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.914 00:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.914 00:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:17.914 { 00:15:17.914 "cntlid": 39, 00:15:17.914 "qid": 0, 00:15:17.914 "state": "enabled", 00:15:17.914 "thread": "nvmf_tgt_poll_group_000", 00:15:17.914 "listen_address": { 00:15:17.914 "trtype": "TCP", 00:15:17.914 "adrfam": "IPv4", 00:15:17.914 "traddr": "10.0.0.2", 00:15:17.914 "trsvcid": "4420" 00:15:17.914 }, 00:15:17.914 "peer_address": { 00:15:17.914 "trtype": "TCP", 00:15:17.914 "adrfam": "IPv4", 00:15:17.914 "traddr": "10.0.0.1", 00:15:17.914 "trsvcid": "59556" 00:15:17.914 }, 00:15:17.914 "auth": { 00:15:17.914 "state": "completed", 00:15:17.914 "digest": "sha256", 00:15:17.914 "dhgroup": "ffdhe6144" 00:15:17.914 } 00:15:17.914 } 00:15:17.914 ]' 00:15:17.914 00:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.210 00:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.210 00:51:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.210 00:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:18.210 00:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.210 00:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.210 00:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.210 00:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.468 00:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTA3MWEzM2JkNTY4ZGI2YzhlNTBhMjg0NzUxMjM3MmQ5NjYyMzgzZTBlZGI5Y2YzYTBjMDQyZDMyNWViNjFkZlZNQEs=: 00:15:19.409 00:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.409 00:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:19.409 00:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.409 00:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.409 00:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.409 00:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:19.409 00:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.409 00:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:19.409 00:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:19.667 00:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:15:19.667 00:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:19.667 00:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:19.667 00:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:19.667 00:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:19.667 00:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.667 00:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.667 00:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.667 00:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.667 00:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.668 00:51:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.668 00:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.607 00:15:20.607 00:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.607 00:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.607 00:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.865 00:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.865 00:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.865 00:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.865 00:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.865 00:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.865 00:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.865 { 00:15:20.865 "cntlid": 41, 00:15:20.865 "qid": 0, 00:15:20.865 "state": "enabled", 00:15:20.865 "thread": "nvmf_tgt_poll_group_000", 00:15:20.865 "listen_address": { 00:15:20.865 "trtype": "TCP", 00:15:20.865 "adrfam": "IPv4", 00:15:20.865 "traddr": "10.0.0.2", 00:15:20.865 "trsvcid": "4420" 00:15:20.865 }, 00:15:20.865 "peer_address": { 00:15:20.865 "trtype": "TCP", 00:15:20.865 "adrfam": "IPv4", 00:15:20.865 "traddr": "10.0.0.1", 00:15:20.865 "trsvcid": "59586" 00:15:20.865 }, 00:15:20.865 "auth": { 00:15:20.865 "state": "completed", 00:15:20.865 "digest": "sha256", 00:15:20.865 "dhgroup": "ffdhe8192" 00:15:20.865 } 00:15:20.865 } 00:15:20.865 ]' 00:15:20.865 00:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.865 00:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.865 00:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.865 00:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:20.865 00:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.123 00:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.123 00:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.123 00:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.381 00:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:ZmEyZmUxNjYyNGQ1MjdiZjQ3ZDVlMDEyZWI5ZGYzYjgwYjg2MzU1OWU1YjAxZGU3X1GPxA==: --dhchap-ctrl-secret DHHC-1:03:MTljYzkxMTI1Zjc2ODcxYmQ1MjEzYjcyZTRkN2E5ZGRmMTczMjZlMTk4M2NlNDg1NDNjYmQ1ODdlZjRhMWZiONH/NNE=: 00:15:22.318 00:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.318 00:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:22.318 00:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.318 00:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.318 00:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.318 00:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.318 00:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:22.318 00:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:22.576 00:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:15:22.576 00:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.576 00:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:22.576 00:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:22.576 00:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:22.576 00:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.576 00:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.576 00:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.576 00:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.576 00:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.576 00:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.576 00:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.513 00:15:23.513 00:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.513 00:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.513 00:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.771 00:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.771 00:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.771 00:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.771 00:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.771 00:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.771 00:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:23.771 { 00:15:23.771 "cntlid": 43, 00:15:23.771 "qid": 0, 00:15:23.771 "state": "enabled", 00:15:23.771 "thread": "nvmf_tgt_poll_group_000", 00:15:23.771 "listen_address": { 00:15:23.771 "trtype": "TCP", 00:15:23.771 "adrfam": "IPv4", 00:15:23.771 "traddr": "10.0.0.2", 00:15:23.771 "trsvcid": "4420" 00:15:23.771 }, 00:15:23.771 "peer_address": { 00:15:23.771 "trtype": "TCP", 00:15:23.771 "adrfam": "IPv4", 00:15:23.771 "traddr": "10.0.0.1", 00:15:23.771 "trsvcid": "59624" 00:15:23.771 }, 00:15:23.771 "auth": { 00:15:23.771 "state": "completed", 00:15:23.771 "digest": "sha256", 00:15:23.771 "dhgroup": "ffdhe8192" 00:15:23.771 } 00:15:23.771 } 00:15:23.771 ]' 00:15:23.771 00:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:23.771 00:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:23.771 00:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:23.771 00:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:23.771 00:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:23.771 00:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.771 00:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.771 00:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.030 00:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDIzNWNmZGNkODBmOGM5MTM1NWEwNTc4NjIyNDA5NTmxCvf8: --dhchap-ctrl-secret DHHC-1:02:M2QxOWYzMDg1ZGExNDczYjRiMTYyZjMxMzNmZGE1OTQyMjk3OWE4OWEwNzhiNjdhGmrOGg==: 00:15:25.406 00:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.406 00:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:25.406 00:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.406 00:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.406 00:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.406 00:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:15:25.406 00:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:25.406 00:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:25.406 00:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:25.406 00:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.406 00:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:25.406 00:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:25.406 00:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:25.406 00:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.406 00:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.406 00:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.406 00:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.406 00:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.406 00:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.406 00:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.340 00:15:26.340 00:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.340 00:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.340 00:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.597 00:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.597 00:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.597 00:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.597 00:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.597 00:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.597 00:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.597 { 00:15:26.597 "cntlid": 45, 00:15:26.597 "qid": 0, 00:15:26.597 "state": "enabled", 00:15:26.597 "thread": "nvmf_tgt_poll_group_000", 00:15:26.597 "listen_address": { 00:15:26.597 "trtype": "TCP", 00:15:26.597 "adrfam": "IPv4", 00:15:26.597 "traddr": "10.0.0.2", 00:15:26.597 "trsvcid": "4420" 
00:15:26.597 }, 00:15:26.597 "peer_address": { 00:15:26.597 "trtype": "TCP", 00:15:26.597 "adrfam": "IPv4", 00:15:26.597 "traddr": "10.0.0.1", 00:15:26.597 "trsvcid": "59658" 00:15:26.597 }, 00:15:26.597 "auth": { 00:15:26.597 "state": "completed", 00:15:26.597 "digest": "sha256", 00:15:26.597 "dhgroup": "ffdhe8192" 00:15:26.597 } 00:15:26.597 } 00:15:26.597 ]' 00:15:26.597 00:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.597 00:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.597 00:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.597 00:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:26.597 00:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.597 00:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.597 00:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.597 00:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.856 00:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzAwOGVjOTIzZWZiNmZjYTZlYjVkMzY4OTdkOGY1NDczMDQ2YTU0NjU3NmJiZDI5woUgqQ==: --dhchap-ctrl-secret DHHC-1:01:MGM4MTlhZmRlODQ1NTY5MDhhOTFhMTg0Yjg3Nzk4NDHrycQE: 00:15:27.791 00:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.791 00:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:27.791 00:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.791 00:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.791 00:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.791 00:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.791 00:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:27.791 00:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:28.076 00:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:28.076 00:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.076 00:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:28.076 00:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:28.076 00:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:28.076 00:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.076 00:52:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:28.076 00:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.076 00:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.076 00:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.076 00:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:28.076 00:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:29.025 00:15:29.025 00:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.025 00:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.025 00:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.283 00:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.283 00:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.283 00:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.283 00:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.283 00:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.283 00:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.283 { 00:15:29.283 "cntlid": 47, 00:15:29.283 "qid": 0, 00:15:29.283 "state": "enabled", 00:15:29.283 "thread": "nvmf_tgt_poll_group_000", 00:15:29.283 "listen_address": { 00:15:29.283 "trtype": "TCP", 00:15:29.283 "adrfam": "IPv4", 00:15:29.283 "traddr": "10.0.0.2", 00:15:29.283 "trsvcid": "4420" 00:15:29.283 }, 00:15:29.283 "peer_address": { 00:15:29.283 "trtype": "TCP", 00:15:29.283 "adrfam": "IPv4", 00:15:29.283 "traddr": "10.0.0.1", 00:15:29.283 "trsvcid": "33478" 00:15:29.283 }, 00:15:29.283 "auth": { 00:15:29.283 "state": "completed", 00:15:29.283 "digest": "sha256", 00:15:29.283 "dhgroup": "ffdhe8192" 00:15:29.283 } 00:15:29.283 } 00:15:29.283 ]' 00:15:29.283 00:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:29.283 00:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.283 00:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:29.283 00:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:29.283 00:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:29.540 00:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.541 00:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.541 
00:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.799 00:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTA3MWEzM2JkNTY4ZGI2YzhlNTBhMjg0NzUxMjM3MmQ5NjYyMzgzZTBlZGI5Y2YzYTBjMDQyZDMyNWViNjFkZlZNQEs=: 00:15:30.732 00:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.732 00:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:30.732 00:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.732 00:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.732 00:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.732 00:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:30.732 00:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:30.732 00:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.732 00:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:30.732 00:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:30.989 00:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:30.989 00:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.989 00:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:30.989 00:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:30.989 00:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:30.989 00:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.990 00:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.990 00:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.990 00:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.990 00:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.990 00:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.990 00:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.247 00:15:31.247 00:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:31.247 00:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.247 00:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:31.504 00:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.504 00:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.504 00:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.504 00:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.504 00:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.504 00:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:31.504 { 00:15:31.504 "cntlid": 49, 00:15:31.504 "qid": 0, 00:15:31.504 "state": "enabled", 00:15:31.504 "thread": "nvmf_tgt_poll_group_000", 00:15:31.504 "listen_address": { 00:15:31.504 "trtype": "TCP", 00:15:31.504 "adrfam": "IPv4", 00:15:31.504 "traddr": "10.0.0.2", 00:15:31.504 "trsvcid": "4420" 00:15:31.504 }, 00:15:31.504 "peer_address": { 00:15:31.504 "trtype": "TCP", 00:15:31.504 "adrfam": "IPv4", 00:15:31.504 "traddr": "10.0.0.1", 00:15:31.504 "trsvcid": "33512" 00:15:31.504 }, 00:15:31.504 "auth": { 00:15:31.504 "state": "completed", 00:15:31.504 "digest": "sha384", 00:15:31.504 "dhgroup": "null" 00:15:31.504 } 00:15:31.504 } 00:15:31.504 ]' 00:15:31.504 00:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:31.504 00:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.504 00:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:31.504 00:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:31.504 00:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.762 00:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.762 00:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.762 00:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.020 00:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmEyZmUxNjYyNGQ1MjdiZjQ3ZDVlMDEyZWI5ZGYzYjgwYjg2MzU1OWU1YjAxZGU3X1GPxA==: --dhchap-ctrl-secret DHHC-1:03:MTljYzkxMTI1Zjc2ODcxYmQ1MjEzYjcyZTRkN2E5ZGRmMTczMjZlMTk4M2NlNDg1NDNjYmQ1ODdlZjRhMWZiONH/NNE=: 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.014 00:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.273 00:15:33.532 00:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:33.532 00:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.532 00:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.532 00:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.532 00:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.532 00:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.532 00:52:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:33.790 00:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.790 00:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.790 { 00:15:33.790 "cntlid": 51, 00:15:33.790 "qid": 0, 00:15:33.790 "state": "enabled", 00:15:33.790 "thread": "nvmf_tgt_poll_group_000", 00:15:33.790 "listen_address": { 00:15:33.790 "trtype": "TCP", 00:15:33.790 "adrfam": "IPv4", 00:15:33.790 "traddr": "10.0.0.2", 00:15:33.790 "trsvcid": "4420" 00:15:33.790 }, 00:15:33.790 "peer_address": { 00:15:33.790 "trtype": "TCP", 00:15:33.790 "adrfam": "IPv4", 00:15:33.790 "traddr": "10.0.0.1", 00:15:33.790 "trsvcid": "33546" 00:15:33.790 }, 00:15:33.790 "auth": { 00:15:33.790 "state": "completed", 00:15:33.790 "digest": "sha384", 00:15:33.790 "dhgroup": "null" 00:15:33.790 } 00:15:33.790 } 00:15:33.790 ]' 00:15:33.790 00:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.790 00:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.790 00:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.790 00:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:33.791 00:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.791 00:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.791 00:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.791 00:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.049 00:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDIzNWNmZGNkODBmOGM5MTM1NWEwNTc4NjIyNDA5NTmxCvf8: --dhchap-ctrl-secret DHHC-1:02:M2QxOWYzMDg1ZGExNDczYjRiMTYyZjMxMzNmZGE1OTQyMjk3OWE4OWEwNzhiNjdhGmrOGg==: 00:15:34.985 00:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.985 00:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.985 00:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.985 00:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.985 00:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.985 00:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.985 00:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:34.985 00:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:35.243 00:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:35.243 00:52:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:35.243 00:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:35.243 00:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:35.243 00:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:35.243 00:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.243 00:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.243 00:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.243 00:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.243 00:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.243 00:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.243 00:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.502 00:15:35.502 00:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.502 00:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.502 00:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.760 00:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.760 00:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.760 00:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.760 00:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.760 00:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.760 00:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.760 { 00:15:35.760 "cntlid": 53, 00:15:35.760 "qid": 0, 00:15:35.760 "state": "enabled", 00:15:35.760 "thread": "nvmf_tgt_poll_group_000", 00:15:35.760 "listen_address": { 00:15:35.760 "trtype": "TCP", 00:15:35.760 "adrfam": "IPv4", 00:15:35.760 "traddr": "10.0.0.2", 00:15:35.760 "trsvcid": "4420" 00:15:35.760 }, 00:15:35.760 "peer_address": { 00:15:35.760 "trtype": "TCP", 00:15:35.760 "adrfam": "IPv4", 00:15:35.760 "traddr": "10.0.0.1", 00:15:35.760 "trsvcid": "33560" 00:15:35.760 }, 00:15:35.760 "auth": { 00:15:35.760 "state": "completed", 00:15:35.760 "digest": "sha384", 00:15:35.760 "dhgroup": "null" 00:15:35.760 } 00:15:35.760 } 00:15:35.760 ]' 00:15:35.760 00:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.760 00:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:15:35.760 00:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.760 00:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:35.760 00:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:36.017 00:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.017 00:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.017 00:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.275 00:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzAwOGVjOTIzZWZiNmZjYTZlYjVkMzY4OTdkOGY1NDczMDQ2YTU0NjU3NmJiZDI5woUgqQ==: --dhchap-ctrl-secret DHHC-1:01:MGM4MTlhZmRlODQ1NTY5MDhhOTFhMTg0Yjg3Nzk4NDHrycQE: 00:15:37.213 00:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.213 00:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:37.213 00:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.213 00:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.213 00:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.213 00:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:37.213 00:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:37.213 00:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:37.471 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:37.471 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.471 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:37.471 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:37.471 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:37.471 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.471 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:37.471 00:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.471 00:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.471 00:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.471 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:37.471 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:37.728 00:15:37.728 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.728 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.728 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.985 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.985 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.985 00:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.985 00:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.985 00:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.985 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.985 { 00:15:37.985 "cntlid": 55, 00:15:37.985 "qid": 0, 00:15:37.985 "state": "enabled", 00:15:37.985 "thread": "nvmf_tgt_poll_group_000", 00:15:37.985 "listen_address": { 00:15:37.985 "trtype": "TCP", 00:15:37.985 "adrfam": "IPv4", 00:15:37.985 "traddr": "10.0.0.2", 00:15:37.985 "trsvcid": "4420" 00:15:37.985 }, 00:15:37.985 "peer_address": { 00:15:37.985 "trtype": "TCP", 00:15:37.985 "adrfam": "IPv4", 00:15:37.985 "traddr": "10.0.0.1", 00:15:37.985 "trsvcid": "50684" 00:15:37.985 }, 00:15:37.985 "auth": { 00:15:37.985 "state": "completed", 00:15:37.985 "digest": "sha384", 00:15:37.985 "dhgroup": "null" 00:15:37.985 } 00:15:37.985 } 00:15:37.985 ]' 00:15:37.985 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.985 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.985 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:38.242 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:38.242 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:38.242 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.242 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.242 00:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.500 00:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTA3MWEzM2JkNTY4ZGI2YzhlNTBhMjg0NzUxMjM3MmQ5NjYyMzgzZTBlZGI5Y2YzYTBjMDQyZDMyNWViNjFkZlZNQEs=: 00:15:39.440 00:52:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.440 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:39.440 00:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.440 00:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.440 00:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.440 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:39.440 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:39.440 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:39.440 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:39.698 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:39.698 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.698 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:39.698 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:39.698 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:39.698 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.698 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.698 00:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.698 00:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.698 00:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.698 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.698 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.955 00:15:39.955 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.955 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.955 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.213 00:52:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.213 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.213 00:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.213 00:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.213 00:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.213 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:40.213 { 00:15:40.213 "cntlid": 57, 00:15:40.213 "qid": 0, 00:15:40.213 "state": "enabled", 00:15:40.213 "thread": "nvmf_tgt_poll_group_000", 00:15:40.213 "listen_address": { 00:15:40.213 "trtype": "TCP", 00:15:40.213 "adrfam": "IPv4", 00:15:40.213 "traddr": "10.0.0.2", 00:15:40.213 "trsvcid": "4420" 00:15:40.213 }, 00:15:40.213 "peer_address": { 00:15:40.213 "trtype": "TCP", 00:15:40.213 "adrfam": "IPv4", 00:15:40.213 "traddr": "10.0.0.1", 00:15:40.213 "trsvcid": "50714" 00:15:40.213 }, 00:15:40.213 "auth": { 00:15:40.213 "state": "completed", 00:15:40.213 "digest": "sha384", 00:15:40.213 "dhgroup": "ffdhe2048" 00:15:40.213 } 00:15:40.213 } 00:15:40.213 ]' 00:15:40.213 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:40.213 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.213 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:40.472 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:40.472 00:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:40.472 00:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.472 00:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.472 00:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.731 00:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmEyZmUxNjYyNGQ1MjdiZjQ3ZDVlMDEyZWI5ZGYzYjgwYjg2MzU1OWU1YjAxZGU3X1GPxA==: --dhchap-ctrl-secret DHHC-1:03:MTljYzkxMTI1Zjc2ODcxYmQ1MjEzYjcyZTRkN2E5ZGRmMTczMjZlMTk4M2NlNDg1NDNjYmQ1ODdlZjRhMWZiONH/NNE=: 00:15:41.667 00:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.667 00:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:41.667 00:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.667 00:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.667 00:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.667 00:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:41.667 00:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:41.667 00:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:41.925 00:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:41.925 00:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.925 00:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:41.925 00:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:41.925 00:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:41.925 00:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.925 00:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.925 00:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.925 00:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.925 00:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.925 00:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.925 00:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.184 00:15:42.184 00:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.184 00:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.184 00:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.441 00:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.441 00:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.441 00:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.441 00:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.441 00:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.441 00:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.441 { 00:15:42.441 "cntlid": 59, 00:15:42.441 "qid": 0, 00:15:42.441 "state": "enabled", 00:15:42.441 "thread": "nvmf_tgt_poll_group_000", 00:15:42.441 "listen_address": { 00:15:42.441 "trtype": "TCP", 00:15:42.441 "adrfam": "IPv4", 00:15:42.441 "traddr": "10.0.0.2", 00:15:42.441 "trsvcid": "4420" 00:15:42.441 }, 00:15:42.441 "peer_address": { 00:15:42.441 "trtype": "TCP", 00:15:42.441 "adrfam": "IPv4", 00:15:42.441 
"traddr": "10.0.0.1", 00:15:42.441 "trsvcid": "50750" 00:15:42.441 }, 00:15:42.441 "auth": { 00:15:42.441 "state": "completed", 00:15:42.441 "digest": "sha384", 00:15:42.441 "dhgroup": "ffdhe2048" 00:15:42.441 } 00:15:42.441 } 00:15:42.441 ]' 00:15:42.441 00:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.441 00:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.441 00:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.441 00:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:42.441 00:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.701 00:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.701 00:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.701 00:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.701 00:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDIzNWNmZGNkODBmOGM5MTM1NWEwNTc4NjIyNDA5NTmxCvf8: --dhchap-ctrl-secret DHHC-1:02:M2QxOWYzMDg1ZGExNDczYjRiMTYyZjMxMzNmZGE1OTQyMjk3OWE4OWEwNzhiNjdhGmrOGg==: 00:15:43.636 00:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.895 00:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:43.895 00:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.895 00:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.895 00:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.895 00:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.895 00:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:43.895 00:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:44.153 00:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:44.153 00:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:44.153 00:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:44.153 00:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:44.153 00:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:44.153 00:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.153 00:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.153 00:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.153 00:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.153 00:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.153 00:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.153 00:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.413 00:15:44.413 00:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.413 00:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.413 00:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.672 00:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.672 00:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.672 00:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.672 00:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.672 00:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.672 00:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:44.672 { 00:15:44.672 "cntlid": 61, 00:15:44.672 "qid": 0, 00:15:44.672 "state": "enabled", 00:15:44.672 "thread": "nvmf_tgt_poll_group_000", 00:15:44.672 "listen_address": { 00:15:44.672 "trtype": "TCP", 00:15:44.672 "adrfam": "IPv4", 00:15:44.672 "traddr": "10.0.0.2", 00:15:44.672 "trsvcid": "4420" 00:15:44.672 }, 00:15:44.672 "peer_address": { 00:15:44.672 "trtype": "TCP", 00:15:44.672 "adrfam": "IPv4", 00:15:44.672 "traddr": "10.0.0.1", 00:15:44.672 "trsvcid": "50772" 00:15:44.672 }, 00:15:44.672 "auth": { 00:15:44.672 "state": "completed", 00:15:44.672 "digest": "sha384", 00:15:44.672 "dhgroup": "ffdhe2048" 00:15:44.672 } 00:15:44.672 } 00:15:44.672 ]' 00:15:44.672 00:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.672 00:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.672 00:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:44.930 00:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:44.930 00:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.930 00:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.930 00:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.930 00:52:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.194 00:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzAwOGVjOTIzZWZiNmZjYTZlYjVkMzY4OTdkOGY1NDczMDQ2YTU0NjU3NmJiZDI5woUgqQ==: --dhchap-ctrl-secret DHHC-1:01:MGM4MTlhZmRlODQ1NTY5MDhhOTFhMTg0Yjg3Nzk4NDHrycQE: 00:15:46.213 00:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.213 00:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:46.213 00:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.213 00:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.213 00:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.213 00:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.213 00:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:46.213 00:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:46.472 00:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:46.472 00:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.472 00:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:46.472 00:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:46.472 00:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:46.472 00:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.472 00:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:46.472 00:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.472 00:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.472 00:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.472 00:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:46.472 00:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:46.730 00:15:46.730 00:52:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:46.730 00:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:46.730 00:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.988 00:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.988 00:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.988 00:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.988 00:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.988 00:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.988 00:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.988 { 00:15:46.988 "cntlid": 63, 00:15:46.988 "qid": 0, 00:15:46.988 "state": "enabled", 00:15:46.988 "thread": "nvmf_tgt_poll_group_000", 00:15:46.988 "listen_address": { 00:15:46.988 "trtype": "TCP", 00:15:46.988 "adrfam": "IPv4", 00:15:46.988 "traddr": "10.0.0.2", 00:15:46.988 "trsvcid": "4420" 00:15:46.988 }, 00:15:46.988 "peer_address": { 00:15:46.988 "trtype": "TCP", 00:15:46.988 "adrfam": "IPv4", 00:15:46.988 "traddr": "10.0.0.1", 00:15:46.988 "trsvcid": "60686" 00:15:46.988 }, 00:15:46.988 "auth": { 00:15:46.988 "state": "completed", 00:15:46.988 "digest": "sha384", 00:15:46.988 "dhgroup": "ffdhe2048" 00:15:46.988 } 00:15:46.988 } 00:15:46.988 ]' 00:15:46.988 00:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.988 00:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.988 00:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.988 00:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:46.988 00:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.988 00:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.988 00:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.988 00:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.246 00:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTA3MWEzM2JkNTY4ZGI2YzhlNTBhMjg0NzUxMjM3MmQ5NjYyMzgzZTBlZGI5Y2YzYTBjMDQyZDMyNWViNjFkZlZNQEs=: 00:15:48.181 00:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.181 00:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:48.181 00:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.181 00:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:48.181 00:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.181 00:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.181 00:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:48.181 00:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:48.181 00:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:48.440 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:48.440 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.440 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:48.440 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:48.440 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:48.440 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.440 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.440 00:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.440 00:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.440 00:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.440 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.440 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.007 00:15:49.007 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:49.007 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.007 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.265 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.265 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.265 00:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.265 00:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.265 00:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.265 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.265 { 
00:15:49.265 "cntlid": 65, 00:15:49.265 "qid": 0, 00:15:49.265 "state": "enabled", 00:15:49.265 "thread": "nvmf_tgt_poll_group_000", 00:15:49.265 "listen_address": { 00:15:49.265 "trtype": "TCP", 00:15:49.265 "adrfam": "IPv4", 00:15:49.265 "traddr": "10.0.0.2", 00:15:49.265 "trsvcid": "4420" 00:15:49.265 }, 00:15:49.265 "peer_address": { 00:15:49.265 "trtype": "TCP", 00:15:49.265 "adrfam": "IPv4", 00:15:49.265 "traddr": "10.0.0.1", 00:15:49.265 "trsvcid": "60698" 00:15:49.265 }, 00:15:49.265 "auth": { 00:15:49.265 "state": "completed", 00:15:49.265 "digest": "sha384", 00:15:49.265 "dhgroup": "ffdhe3072" 00:15:49.265 } 00:15:49.265 } 00:15:49.265 ]' 00:15:49.265 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.265 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.265 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.265 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:49.265 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.265 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.265 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.265 00:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.523 00:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmEyZmUxNjYyNGQ1MjdiZjQ3ZDVlMDEyZWI5ZGYzYjgwYjg2MzU1OWU1YjAxZGU3X1GPxA==: --dhchap-ctrl-secret DHHC-1:03:MTljYzkxMTI1Zjc2ODcxYmQ1MjEzYjcyZTRkN2E5ZGRmMTczMjZlMTk4M2NlNDg1NDNjYmQ1ODdlZjRhMWZiONH/NNE=: 00:15:50.458 00:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.458 00:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.458 00:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.458 00:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.458 00:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.458 00:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.458 00:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:50.458 00:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:51.025 00:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:51.025 00:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.025 00:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:15:51.025 00:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:51.025 00:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:51.025 00:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.025 00:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.025 00:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.025 00:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.025 00:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.025 00:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.025 00:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.284 00:15:51.284 00:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.284 00:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.284 00:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.542 00:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.542 00:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.542 00:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.542 00:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.542 00:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.542 00:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.542 { 00:15:51.542 "cntlid": 67, 00:15:51.542 "qid": 0, 00:15:51.542 "state": "enabled", 00:15:51.542 "thread": "nvmf_tgt_poll_group_000", 00:15:51.542 "listen_address": { 00:15:51.542 "trtype": "TCP", 00:15:51.542 "adrfam": "IPv4", 00:15:51.542 "traddr": "10.0.0.2", 00:15:51.542 "trsvcid": "4420" 00:15:51.542 }, 00:15:51.542 "peer_address": { 00:15:51.542 "trtype": "TCP", 00:15:51.542 "adrfam": "IPv4", 00:15:51.542 "traddr": "10.0.0.1", 00:15:51.542 "trsvcid": "60730" 00:15:51.542 }, 00:15:51.542 "auth": { 00:15:51.542 "state": "completed", 00:15:51.542 "digest": "sha384", 00:15:51.542 "dhgroup": "ffdhe3072" 00:15:51.542 } 00:15:51.542 } 00:15:51.542 ]' 00:15:51.542 00:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.542 00:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.542 00:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.542 00:52:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:51.542 00:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.542 00:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.542 00:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.542 00:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.800 00:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDIzNWNmZGNkODBmOGM5MTM1NWEwNTc4NjIyNDA5NTmxCvf8: --dhchap-ctrl-secret DHHC-1:02:M2QxOWYzMDg1ZGExNDczYjRiMTYyZjMxMzNmZGE1OTQyMjk3OWE4OWEwNzhiNjdhGmrOGg==: 00:15:52.734 00:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.734 00:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:52.734 00:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.734 00:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.734 00:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.734 00:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.734 00:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:52.734 00:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:52.992 00:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:52.992 00:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.992 00:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:52.992 00:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:52.992 00:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:52.992 00:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.992 00:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.992 00:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.992 00:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.992 00:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.992 00:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.992 00:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.558 00:15:53.558 00:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.558 00:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.558 00:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.558 00:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.558 00:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.558 00:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.558 00:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.558 00:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.558 00:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.558 { 00:15:53.558 "cntlid": 69, 00:15:53.558 "qid": 0, 00:15:53.558 "state": "enabled", 00:15:53.558 "thread": "nvmf_tgt_poll_group_000", 00:15:53.558 "listen_address": { 00:15:53.558 "trtype": "TCP", 00:15:53.558 "adrfam": "IPv4", 00:15:53.558 "traddr": "10.0.0.2", 00:15:53.558 "trsvcid": "4420" 00:15:53.558 }, 00:15:53.558 "peer_address": { 00:15:53.558 "trtype": "TCP", 00:15:53.558 "adrfam": "IPv4", 00:15:53.558 "traddr": "10.0.0.1", 00:15:53.558 "trsvcid": "60754" 00:15:53.558 }, 00:15:53.558 "auth": { 00:15:53.558 "state": "completed", 00:15:53.558 "digest": "sha384", 00:15:53.558 "dhgroup": "ffdhe3072" 00:15:53.558 } 00:15:53.558 } 00:15:53.558 ]' 00:15:53.558 00:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.816 00:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.816 00:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.816 00:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:53.816 00:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.816 00:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.816 00:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.816 00:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.074 00:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzAwOGVjOTIzZWZiNmZjYTZlYjVkMzY4OTdkOGY1NDczMDQ2YTU0NjU3NmJiZDI5woUgqQ==: --dhchap-ctrl-secret 
DHHC-1:01:MGM4MTlhZmRlODQ1NTY5MDhhOTFhMTg0Yjg3Nzk4NDHrycQE: 00:15:55.009 00:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.009 00:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.010 00:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.010 00:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.010 00:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.010 00:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.010 00:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:55.010 00:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:55.267 00:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:55.267 00:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.267 00:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:55.267 00:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:55.267 00:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:55.267 00:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.267 00:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:55.267 00:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.267 00:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.267 00:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.267 00:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.267 00:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.831 00:15:55.831 00:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.831 00:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.831 00:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.831 00:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.831 00:52:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.831 00:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.831 00:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.831 00:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.831 00:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.831 { 00:15:55.831 "cntlid": 71, 00:15:55.831 "qid": 0, 00:15:55.831 "state": "enabled", 00:15:55.831 "thread": "nvmf_tgt_poll_group_000", 00:15:55.831 "listen_address": { 00:15:55.831 "trtype": "TCP", 00:15:55.831 "adrfam": "IPv4", 00:15:55.831 "traddr": "10.0.0.2", 00:15:55.831 "trsvcid": "4420" 00:15:55.831 }, 00:15:55.831 "peer_address": { 00:15:55.831 "trtype": "TCP", 00:15:55.831 "adrfam": "IPv4", 00:15:55.831 "traddr": "10.0.0.1", 00:15:55.831 "trsvcid": "60792" 00:15:55.831 }, 00:15:55.831 "auth": { 00:15:55.831 "state": "completed", 00:15:55.831 "digest": "sha384", 00:15:55.831 "dhgroup": "ffdhe3072" 00:15:55.831 } 00:15:55.831 } 00:15:55.831 ]' 00:15:55.831 00:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.089 00:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.089 00:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.089 00:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:56.089 00:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.089 00:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.089 00:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.089 00:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.346 00:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTA3MWEzM2JkNTY4ZGI2YzhlNTBhMjg0NzUxMjM3MmQ5NjYyMzgzZTBlZGI5Y2YzYTBjMDQyZDMyNWViNjFkZlZNQEs=: 00:15:57.277 00:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.277 00:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.277 00:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.277 00:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.277 00:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.277 00:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.277 00:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.277 00:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:57.277 00:52:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:57.534 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:57.534 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.534 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:57.534 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:57.534 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:57.534 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.534 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.534 00:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.534 00:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.534 00:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.534 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.534 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.791 00:15:57.791 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.791 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.791 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.048 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.048 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.048 00:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.048 00:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.048 00:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.048 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.048 { 00:15:58.048 "cntlid": 73, 00:15:58.048 "qid": 0, 00:15:58.048 "state": "enabled", 00:15:58.048 "thread": "nvmf_tgt_poll_group_000", 00:15:58.048 "listen_address": { 00:15:58.048 "trtype": "TCP", 00:15:58.048 "adrfam": "IPv4", 00:15:58.048 "traddr": "10.0.0.2", 00:15:58.048 "trsvcid": "4420" 00:15:58.048 }, 00:15:58.048 "peer_address": { 00:15:58.048 "trtype": "TCP", 00:15:58.048 "adrfam": "IPv4", 00:15:58.048 "traddr": "10.0.0.1", 00:15:58.048 "trsvcid": "56402" 00:15:58.048 }, 00:15:58.048 "auth": { 00:15:58.048 
"state": "completed", 00:15:58.048 "digest": "sha384", 00:15:58.048 "dhgroup": "ffdhe4096" 00:15:58.048 } 00:15:58.048 } 00:15:58.048 ]' 00:15:58.305 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.305 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.305 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.305 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:58.305 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.305 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.305 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.305 00:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.562 00:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmEyZmUxNjYyNGQ1MjdiZjQ3ZDVlMDEyZWI5ZGYzYjgwYjg2MzU1OWU1YjAxZGU3X1GPxA==: --dhchap-ctrl-secret DHHC-1:03:MTljYzkxMTI1Zjc2ODcxYmQ1MjEzYjcyZTRkN2E5ZGRmMTczMjZlMTk4M2NlNDg1NDNjYmQ1ODdlZjRhMWZiONH/NNE=: 00:15:59.522 00:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.522 00:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:59.522 00:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.522 00:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.522 00:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.522 00:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.522 00:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:59.522 00:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:59.779 00:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:59.779 00:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.779 00:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:59.779 00:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:59.779 00:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:59.779 00:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.779 00:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.779 00:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.779 00:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.779 00:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.779 00:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.779 00:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.343 00:16:00.343 00:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.343 00:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.343 00:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.601 00:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.601 00:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.601 00:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.601 00:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.601 00:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.601 00:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.601 { 00:16:00.601 "cntlid": 75, 00:16:00.601 "qid": 0, 00:16:00.601 "state": "enabled", 00:16:00.601 "thread": "nvmf_tgt_poll_group_000", 00:16:00.601 "listen_address": { 00:16:00.601 "trtype": "TCP", 00:16:00.601 "adrfam": "IPv4", 00:16:00.601 "traddr": "10.0.0.2", 00:16:00.601 "trsvcid": "4420" 00:16:00.601 }, 00:16:00.601 "peer_address": { 00:16:00.601 "trtype": "TCP", 00:16:00.601 "adrfam": "IPv4", 00:16:00.601 "traddr": "10.0.0.1", 00:16:00.601 "trsvcid": "56422" 00:16:00.601 }, 00:16:00.601 "auth": { 00:16:00.601 "state": "completed", 00:16:00.601 "digest": "sha384", 00:16:00.601 "dhgroup": "ffdhe4096" 00:16:00.601 } 00:16:00.601 } 00:16:00.601 ]' 00:16:00.601 00:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.601 00:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.601 00:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.601 00:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:00.601 00:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.601 00:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.601 00:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.601 00:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.859 00:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDIzNWNmZGNkODBmOGM5MTM1NWEwNTc4NjIyNDA5NTmxCvf8: --dhchap-ctrl-secret DHHC-1:02:M2QxOWYzMDg1ZGExNDczYjRiMTYyZjMxMzNmZGE1OTQyMjk3OWE4OWEwNzhiNjdhGmrOGg==: 00:16:01.792 00:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.792 00:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.792 00:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.792 00:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.792 00:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.792 00:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.792 00:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:01.792 00:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:02.050 00:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:16:02.050 00:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.050 00:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:02.050 00:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:02.050 00:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:02.050 00:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.050 00:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.050 00:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.050 00:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.050 00:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.050 00:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.050 00:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:16:02.615 00:16:02.615 00:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.615 00:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.615 00:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.615 00:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.615 00:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.615 00:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.615 00:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.873 00:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.873 00:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.873 { 00:16:02.873 "cntlid": 77, 00:16:02.873 "qid": 0, 00:16:02.873 "state": "enabled", 00:16:02.873 "thread": "nvmf_tgt_poll_group_000", 00:16:02.873 "listen_address": { 00:16:02.873 "trtype": "TCP", 00:16:02.873 "adrfam": "IPv4", 00:16:02.873 "traddr": "10.0.0.2", 00:16:02.873 "trsvcid": "4420" 00:16:02.873 }, 00:16:02.873 "peer_address": { 00:16:02.873 "trtype": "TCP", 00:16:02.873 "adrfam": "IPv4", 00:16:02.873 "traddr": "10.0.0.1", 00:16:02.873 "trsvcid": "56452" 00:16:02.873 }, 00:16:02.873 "auth": { 00:16:02.873 "state": "completed", 00:16:02.873 "digest": "sha384", 00:16:02.873 "dhgroup": "ffdhe4096" 00:16:02.873 } 00:16:02.873 } 00:16:02.873 ]' 00:16:02.873 00:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.873 00:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.873 00:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.873 00:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:02.873 00:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.873 00:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.873 00:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.873 00:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.130 00:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzAwOGVjOTIzZWZiNmZjYTZlYjVkMzY4OTdkOGY1NDczMDQ2YTU0NjU3NmJiZDI5woUgqQ==: --dhchap-ctrl-secret DHHC-1:01:MGM4MTlhZmRlODQ1NTY5MDhhOTFhMTg0Yjg3Nzk4NDHrycQE: 00:16:04.064 00:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.064 00:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.064 00:52:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.064 00:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.064 00:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.064 00:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.064 00:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.064 00:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.322 00:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:16:04.322 00:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.322 00:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:04.322 00:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:04.322 00:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:04.322 00:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.322 00:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:04.322 00:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.322 00:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.322 00:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.322 00:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.322 00:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.888 00:16:04.888 00:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.888 00:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.888 00:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.146 00:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.146 00:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.146 00:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.146 00:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.146 00:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.146 00:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.146 { 00:16:05.146 "cntlid": 79, 00:16:05.146 "qid": 
0, 00:16:05.146 "state": "enabled", 00:16:05.146 "thread": "nvmf_tgt_poll_group_000", 00:16:05.146 "listen_address": { 00:16:05.146 "trtype": "TCP", 00:16:05.146 "adrfam": "IPv4", 00:16:05.146 "traddr": "10.0.0.2", 00:16:05.146 "trsvcid": "4420" 00:16:05.146 }, 00:16:05.146 "peer_address": { 00:16:05.146 "trtype": "TCP", 00:16:05.146 "adrfam": "IPv4", 00:16:05.146 "traddr": "10.0.0.1", 00:16:05.146 "trsvcid": "56478" 00:16:05.146 }, 00:16:05.146 "auth": { 00:16:05.146 "state": "completed", 00:16:05.146 "digest": "sha384", 00:16:05.146 "dhgroup": "ffdhe4096" 00:16:05.146 } 00:16:05.146 } 00:16:05.146 ]' 00:16:05.146 00:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.146 00:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.146 00:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.146 00:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:05.146 00:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.146 00:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.146 00:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.146 00:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.404 00:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTA3MWEzM2JkNTY4ZGI2YzhlNTBhMjg0NzUxMjM3MmQ5NjYyMzgzZTBlZGI5Y2YzYTBjMDQyZDMyNWViNjFkZlZNQEs=: 00:16:06.337 00:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.337 00:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.337 00:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.337 00:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.337 00:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.337 00:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.337 00:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.337 00:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:06.337 00:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:06.596 00:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:16:06.596 00:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.596 00:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:06.596 00:52:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:06.596 00:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:06.596 00:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.596 00:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.596 00:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.596 00:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.596 00:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.596 00:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.596 00:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.162 00:16:07.162 00:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.162 00:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.162 00:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.420 00:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.420 00:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.420 00:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.420 00:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.420 00:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.420 00:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.420 { 00:16:07.420 "cntlid": 81, 00:16:07.420 "qid": 0, 00:16:07.420 "state": "enabled", 00:16:07.420 "thread": "nvmf_tgt_poll_group_000", 00:16:07.420 "listen_address": { 00:16:07.420 "trtype": "TCP", 00:16:07.420 "adrfam": "IPv4", 00:16:07.420 "traddr": "10.0.0.2", 00:16:07.420 "trsvcid": "4420" 00:16:07.420 }, 00:16:07.420 "peer_address": { 00:16:07.420 "trtype": "TCP", 00:16:07.420 "adrfam": "IPv4", 00:16:07.420 "traddr": "10.0.0.1", 00:16:07.420 "trsvcid": "57522" 00:16:07.420 }, 00:16:07.420 "auth": { 00:16:07.420 "state": "completed", 00:16:07.420 "digest": "sha384", 00:16:07.420 "dhgroup": "ffdhe6144" 00:16:07.420 } 00:16:07.420 } 00:16:07.420 ]' 00:16:07.420 00:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.420 00:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.420 00:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.420 00:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:07.420 00:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.420 00:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.420 00:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.420 00:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.678 00:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmEyZmUxNjYyNGQ1MjdiZjQ3ZDVlMDEyZWI5ZGYzYjgwYjg2MzU1OWU1YjAxZGU3X1GPxA==: --dhchap-ctrl-secret DHHC-1:03:MTljYzkxMTI1Zjc2ODcxYmQ1MjEzYjcyZTRkN2E5ZGRmMTczMjZlMTk4M2NlNDg1NDNjYmQ1ODdlZjRhMWZiONH/NNE=: 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.064 00:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.631 00:16:09.631 00:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.631 00:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.631 00:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.934 00:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.934 00:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.934 00:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.934 00:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.934 00:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.934 00:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.934 { 00:16:09.934 "cntlid": 83, 00:16:09.934 "qid": 0, 00:16:09.934 "state": "enabled", 00:16:09.934 "thread": "nvmf_tgt_poll_group_000", 00:16:09.934 "listen_address": { 00:16:09.934 "trtype": "TCP", 00:16:09.934 "adrfam": "IPv4", 00:16:09.934 "traddr": "10.0.0.2", 00:16:09.934 "trsvcid": "4420" 00:16:09.934 }, 00:16:09.934 "peer_address": { 00:16:09.934 "trtype": "TCP", 00:16:09.934 "adrfam": "IPv4", 00:16:09.934 "traddr": "10.0.0.1", 00:16:09.934 "trsvcid": "57546" 00:16:09.934 }, 00:16:09.934 "auth": { 00:16:09.934 "state": "completed", 00:16:09.934 "digest": "sha384", 00:16:09.934 "dhgroup": "ffdhe6144" 00:16:09.934 } 00:16:09.934 } 00:16:09.934 ]' 00:16:09.934 00:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.934 00:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.934 00:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.934 00:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:09.934 00:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.934 00:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.934 00:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.934 00:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.232 00:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDIzNWNmZGNkODBmOGM5MTM1NWEwNTc4NjIyNDA5NTmxCvf8: --dhchap-ctrl-secret 
DHHC-1:02:M2QxOWYzMDg1ZGExNDczYjRiMTYyZjMxMzNmZGE1OTQyMjk3OWE4OWEwNzhiNjdhGmrOGg==: 00:16:11.603 00:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.603 00:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.603 00:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.603 00:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.604 00:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.604 00:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.604 00:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:11.604 00:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:11.604 00:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:16:11.604 00:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.604 00:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:11.604 00:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:11.604 00:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:11.604 00:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.604 00:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.604 00:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.604 00:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.604 00:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.604 00:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.604 00:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.166 00:16:12.166 00:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.166 00:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.166 00:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.423 00:52:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.423 00:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.423 00:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.423 00:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.423 00:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.423 00:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.423 { 00:16:12.423 "cntlid": 85, 00:16:12.423 "qid": 0, 00:16:12.423 "state": "enabled", 00:16:12.423 "thread": "nvmf_tgt_poll_group_000", 00:16:12.423 "listen_address": { 00:16:12.423 "trtype": "TCP", 00:16:12.423 "adrfam": "IPv4", 00:16:12.423 "traddr": "10.0.0.2", 00:16:12.423 "trsvcid": "4420" 00:16:12.423 }, 00:16:12.423 "peer_address": { 00:16:12.423 "trtype": "TCP", 00:16:12.423 "adrfam": "IPv4", 00:16:12.423 "traddr": "10.0.0.1", 00:16:12.423 "trsvcid": "57588" 00:16:12.423 }, 00:16:12.423 "auth": { 00:16:12.423 "state": "completed", 00:16:12.423 "digest": "sha384", 00:16:12.423 "dhgroup": "ffdhe6144" 00:16:12.423 } 00:16:12.423 } 00:16:12.423 ]' 00:16:12.423 00:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.423 00:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.423 00:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.423 00:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:12.423 00:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.680 00:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.680 00:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.680 00:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.937 00:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzAwOGVjOTIzZWZiNmZjYTZlYjVkMzY4OTdkOGY1NDczMDQ2YTU0NjU3NmJiZDI5woUgqQ==: --dhchap-ctrl-secret DHHC-1:01:MGM4MTlhZmRlODQ1NTY5MDhhOTFhMTg0Yjg3Nzk4NDHrycQE: 00:16:13.866 00:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.866 00:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.866 00:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.866 00:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.866 00:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.866 00:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.866 00:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
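At this point the script starts the sha384/ffdhe6144 pass for key3. Condensed into a plain command sequence, one such connect_authenticate pass looks like the sketch below. This is only a summary of the trace: target startup, nvmf subsystem and listener creation, and DH-HMAC-CHAP key registration happen earlier in target/auth.sh and are not reproduced here; scripts/rpc.py stands for the full rpc.py path used in the trace, and the target-side calls are assumed to go to rpc.py's default socket (the trace's rpc_cmd helper), while the host-side bdev_nvme calls use -s /var/tmp/host.sock exactly as logged.

  # host side: limit the initiator to the digest/dhgroup under test
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  # target side: allow the host NQN with DH-HMAC-CHAP key3 (key3 has no controller key in this run)
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
  # host side: attach a controller, which performs the in-band authentication on connect
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
  # verify: the controller exists and the target reports the expected auth parameters
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'              # nvme0
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # sha384
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # ffdhe6144
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # completed
  # tear down before the next digest/dhgroup/key combination
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The trace that follows is exactly this pass, interleaved with the usual xtrace output.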
00:16:13.866 00:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:14.122 00:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:16:14.122 00:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.122 00:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:14.122 00:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:14.122 00:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:14.122 00:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.122 00:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:14.122 00:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.122 00:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.122 00:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.122 00:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.122 00:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.685 00:16:14.685 00:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.685 00:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.685 00:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.943 00:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.943 00:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.943 00:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.943 00:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.943 00:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.943 00:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.943 { 00:16:14.943 "cntlid": 87, 00:16:14.943 "qid": 0, 00:16:14.943 "state": "enabled", 00:16:14.943 "thread": "nvmf_tgt_poll_group_000", 00:16:14.943 "listen_address": { 00:16:14.943 "trtype": "TCP", 00:16:14.943 "adrfam": "IPv4", 00:16:14.943 "traddr": "10.0.0.2", 00:16:14.943 "trsvcid": "4420" 00:16:14.943 }, 00:16:14.943 "peer_address": { 00:16:14.943 "trtype": "TCP", 00:16:14.943 "adrfam": "IPv4", 00:16:14.943 "traddr": "10.0.0.1", 00:16:14.943 "trsvcid": "57614" 00:16:14.943 }, 00:16:14.943 "auth": { 00:16:14.943 "state": "completed", 
00:16:14.943 "digest": "sha384", 00:16:14.943 "dhgroup": "ffdhe6144" 00:16:14.943 } 00:16:14.943 } 00:16:14.943 ]' 00:16:14.943 00:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.943 00:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.943 00:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.943 00:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:14.943 00:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.943 00:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.943 00:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.943 00:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.200 00:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTA3MWEzM2JkNTY4ZGI2YzhlNTBhMjg0NzUxMjM3MmQ5NjYyMzgzZTBlZGI5Y2YzYTBjMDQyZDMyNWViNjFkZlZNQEs=: 00:16:16.133 00:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.133 00:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.133 00:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.133 00:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.133 00:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.133 00:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.133 00:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.133 00:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:16.133 00:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:16.390 00:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:16:16.390 00:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.390 00:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:16.390 00:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:16.390 00:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:16.390 00:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.390 00:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:16.390 00:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.390 00:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.390 00:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.390 00:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.390 00:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.324 00:16:17.324 00:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.324 00:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.324 00:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.582 00:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.582 00:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.582 00:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.582 00:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.582 00:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.582 00:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.582 { 00:16:17.582 "cntlid": 89, 00:16:17.582 "qid": 0, 00:16:17.582 "state": "enabled", 00:16:17.582 "thread": "nvmf_tgt_poll_group_000", 00:16:17.582 "listen_address": { 00:16:17.582 "trtype": "TCP", 00:16:17.582 "adrfam": "IPv4", 00:16:17.582 "traddr": "10.0.0.2", 00:16:17.582 "trsvcid": "4420" 00:16:17.582 }, 00:16:17.582 "peer_address": { 00:16:17.582 "trtype": "TCP", 00:16:17.582 "adrfam": "IPv4", 00:16:17.582 "traddr": "10.0.0.1", 00:16:17.582 "trsvcid": "56292" 00:16:17.582 }, 00:16:17.582 "auth": { 00:16:17.582 "state": "completed", 00:16:17.582 "digest": "sha384", 00:16:17.582 "dhgroup": "ffdhe8192" 00:16:17.582 } 00:16:17.582 } 00:16:17.582 ]' 00:16:17.582 00:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.582 00:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.582 00:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.840 00:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:17.840 00:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.840 00:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.840 00:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.840 00:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.097 00:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmEyZmUxNjYyNGQ1MjdiZjQ3ZDVlMDEyZWI5ZGYzYjgwYjg2MzU1OWU1YjAxZGU3X1GPxA==: --dhchap-ctrl-secret DHHC-1:03:MTljYzkxMTI1Zjc2ODcxYmQ1MjEzYjcyZTRkN2E5ZGRmMTczMjZlMTk4M2NlNDg1NDNjYmQ1ODdlZjRhMWZiONH/NNE=: 00:16:19.029 00:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.029 00:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.029 00:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.029 00:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.029 00:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.029 00:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.029 00:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:19.029 00:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:19.287 00:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:16:19.287 00:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.287 00:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:19.287 00:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:19.287 00:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:19.287 00:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.287 00:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.287 00:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.287 00:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.544 00:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.544 00:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.544 00:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:16:20.481 00:16:20.481 00:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.481 00:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.481 00:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.481 00:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.481 00:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.481 00:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.481 00:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.739 00:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.739 00:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.739 { 00:16:20.739 "cntlid": 91, 00:16:20.739 "qid": 0, 00:16:20.739 "state": "enabled", 00:16:20.739 "thread": "nvmf_tgt_poll_group_000", 00:16:20.739 "listen_address": { 00:16:20.739 "trtype": "TCP", 00:16:20.739 "adrfam": "IPv4", 00:16:20.739 "traddr": "10.0.0.2", 00:16:20.739 "trsvcid": "4420" 00:16:20.739 }, 00:16:20.739 "peer_address": { 00:16:20.739 "trtype": "TCP", 00:16:20.739 "adrfam": "IPv4", 00:16:20.739 "traddr": "10.0.0.1", 00:16:20.739 "trsvcid": "56316" 00:16:20.739 }, 00:16:20.739 "auth": { 00:16:20.739 "state": "completed", 00:16:20.739 "digest": "sha384", 00:16:20.739 "dhgroup": "ffdhe8192" 00:16:20.739 } 00:16:20.739 } 00:16:20.739 ]' 00:16:20.739 00:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.739 00:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.739 00:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.739 00:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.739 00:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.739 00:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.739 00:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.739 00:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.996 00:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDIzNWNmZGNkODBmOGM5MTM1NWEwNTc4NjIyNDA5NTmxCvf8: --dhchap-ctrl-secret DHHC-1:02:M2QxOWYzMDg1ZGExNDczYjRiMTYyZjMxMzNmZGE1OTQyMjk3OWE4OWEwNzhiNjdhGmrOGg==: 00:16:21.928 00:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.928 00:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:21.928 00:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:21.928 00:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.928 00:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.928 00:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.928 00:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:21.928 00:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:22.186 00:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:16:22.186 00:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.186 00:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:22.186 00:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:22.186 00:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:22.186 00:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.186 00:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.186 00:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.186 00:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.186 00:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.186 00:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.186 00:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.114 00:16:23.114 00:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.114 00:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.114 00:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.371 00:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.371 00:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.371 00:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.371 00:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.371 00:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.371 00:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.371 { 
00:16:23.371 "cntlid": 93, 00:16:23.371 "qid": 0, 00:16:23.371 "state": "enabled", 00:16:23.371 "thread": "nvmf_tgt_poll_group_000", 00:16:23.371 "listen_address": { 00:16:23.371 "trtype": "TCP", 00:16:23.371 "adrfam": "IPv4", 00:16:23.371 "traddr": "10.0.0.2", 00:16:23.371 "trsvcid": "4420" 00:16:23.371 }, 00:16:23.371 "peer_address": { 00:16:23.371 "trtype": "TCP", 00:16:23.371 "adrfam": "IPv4", 00:16:23.371 "traddr": "10.0.0.1", 00:16:23.371 "trsvcid": "56332" 00:16:23.371 }, 00:16:23.371 "auth": { 00:16:23.371 "state": "completed", 00:16:23.371 "digest": "sha384", 00:16:23.371 "dhgroup": "ffdhe8192" 00:16:23.371 } 00:16:23.371 } 00:16:23.371 ]' 00:16:23.371 00:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.371 00:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.371 00:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.371 00:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:23.371 00:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.628 00:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.628 00:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.628 00:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.885 00:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzAwOGVjOTIzZWZiNmZjYTZlYjVkMzY4OTdkOGY1NDczMDQ2YTU0NjU3NmJiZDI5woUgqQ==: --dhchap-ctrl-secret DHHC-1:01:MGM4MTlhZmRlODQ1NTY5MDhhOTFhMTg0Yjg3Nzk4NDHrycQE: 00:16:24.818 00:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.818 00:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.818 00:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.818 00:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.818 00:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.818 00:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.818 00:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:24.818 00:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:25.075 00:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:25.075 00:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.075 00:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:25.075 00:52:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:25.075 00:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:25.075 00:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.075 00:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:25.075 00:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.075 00:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.075 00:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.075 00:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.075 00:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:26.007 00:16:26.007 00:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.007 00:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.007 00:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.264 00:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.264 00:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.264 00:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.264 00:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.264 00:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.264 00:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.264 { 00:16:26.264 "cntlid": 95, 00:16:26.264 "qid": 0, 00:16:26.264 "state": "enabled", 00:16:26.264 "thread": "nvmf_tgt_poll_group_000", 00:16:26.264 "listen_address": { 00:16:26.264 "trtype": "TCP", 00:16:26.264 "adrfam": "IPv4", 00:16:26.264 "traddr": "10.0.0.2", 00:16:26.264 "trsvcid": "4420" 00:16:26.264 }, 00:16:26.264 "peer_address": { 00:16:26.264 "trtype": "TCP", 00:16:26.264 "adrfam": "IPv4", 00:16:26.264 "traddr": "10.0.0.1", 00:16:26.264 "trsvcid": "56364" 00:16:26.264 }, 00:16:26.264 "auth": { 00:16:26.264 "state": "completed", 00:16:26.264 "digest": "sha384", 00:16:26.264 "dhgroup": "ffdhe8192" 00:16:26.264 } 00:16:26.264 } 00:16:26.264 ]' 00:16:26.264 00:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.264 00:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.264 00:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.264 00:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:26.264 00:53:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.521 00:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.521 00:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.521 00:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.778 00:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTA3MWEzM2JkNTY4ZGI2YzhlNTBhMjg0NzUxMjM3MmQ5NjYyMzgzZTBlZGI5Y2YzYTBjMDQyZDMyNWViNjFkZlZNQEs=: 00:16:27.709 00:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.709 00:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.709 00:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.709 00:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.709 00:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.709 00:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:27.709 00:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.709 00:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.709 00:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:27.709 00:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:27.965 00:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:27.965 00:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.965 00:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:27.965 00:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:27.965 00:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:27.965 00:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.965 00:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.965 00:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.965 00:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.965 00:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.965 00:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.965 00:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.223 00:16:28.223 00:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.223 00:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.223 00:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.479 00:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.479 00:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.480 00:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.480 00:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.480 00:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.480 00:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.480 { 00:16:28.480 "cntlid": 97, 00:16:28.480 "qid": 0, 00:16:28.480 "state": "enabled", 00:16:28.480 "thread": "nvmf_tgt_poll_group_000", 00:16:28.480 "listen_address": { 00:16:28.480 "trtype": "TCP", 00:16:28.480 "adrfam": "IPv4", 00:16:28.480 "traddr": "10.0.0.2", 00:16:28.480 "trsvcid": "4420" 00:16:28.480 }, 00:16:28.480 "peer_address": { 00:16:28.480 "trtype": "TCP", 00:16:28.480 "adrfam": "IPv4", 00:16:28.480 "traddr": "10.0.0.1", 00:16:28.480 "trsvcid": "58222" 00:16:28.480 }, 00:16:28.480 "auth": { 00:16:28.480 "state": "completed", 00:16:28.480 "digest": "sha512", 00:16:28.480 "dhgroup": "null" 00:16:28.480 } 00:16:28.480 } 00:16:28.480 ]' 00:16:28.480 00:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.480 00:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.480 00:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.480 00:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:28.480 00:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.480 00:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.480 00:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.480 00:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.737 00:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmEyZmUxNjYyNGQ1MjdiZjQ3ZDVlMDEyZWI5ZGYzYjgwYjg2MzU1OWU1YjAxZGU3X1GPxA==: --dhchap-ctrl-secret 
DHHC-1:03:MTljYzkxMTI1Zjc2ODcxYmQ1MjEzYjcyZTRkN2E5ZGRmMTczMjZlMTk4M2NlNDg1NDNjYmQ1ODdlZjRhMWZiONH/NNE=: 00:16:29.699 00:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.699 00:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:29.699 00:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.699 00:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.699 00:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.699 00:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.699 00:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:29.699 00:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:29.957 00:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:16:29.957 00:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.957 00:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:29.957 00:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:29.957 00:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:29.957 00:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.957 00:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.957 00:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.957 00:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.957 00:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.957 00:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.957 00:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.522 00:16:30.522 00:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:30.522 00:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.522 00:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.522 00:53:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.522 00:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.522 00:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.522 00:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.522 00:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.522 00:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.522 { 00:16:30.522 "cntlid": 99, 00:16:30.522 "qid": 0, 00:16:30.522 "state": "enabled", 00:16:30.522 "thread": "nvmf_tgt_poll_group_000", 00:16:30.522 "listen_address": { 00:16:30.522 "trtype": "TCP", 00:16:30.522 "adrfam": "IPv4", 00:16:30.522 "traddr": "10.0.0.2", 00:16:30.522 "trsvcid": "4420" 00:16:30.522 }, 00:16:30.522 "peer_address": { 00:16:30.522 "trtype": "TCP", 00:16:30.522 "adrfam": "IPv4", 00:16:30.522 "traddr": "10.0.0.1", 00:16:30.522 "trsvcid": "58244" 00:16:30.522 }, 00:16:30.522 "auth": { 00:16:30.522 "state": "completed", 00:16:30.522 "digest": "sha512", 00:16:30.522 "dhgroup": "null" 00:16:30.522 } 00:16:30.522 } 00:16:30.522 ]' 00:16:30.522 00:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.780 00:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.780 00:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.780 00:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:30.780 00:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.780 00:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.780 00:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.780 00:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.037 00:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDIzNWNmZGNkODBmOGM5MTM1NWEwNTc4NjIyNDA5NTmxCvf8: --dhchap-ctrl-secret DHHC-1:02:M2QxOWYzMDg1ZGExNDczYjRiMTYyZjMxMzNmZGE1OTQyMjk3OWE4OWEwNzhiNjdhGmrOGg==: 00:16:31.970 00:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.970 00:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.970 00:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.970 00:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.970 00:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.970 00:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.970 00:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:31.970 00:53:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:32.227 00:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:16:32.227 00:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.227 00:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:32.227 00:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:32.227 00:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:32.227 00:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.227 00:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.227 00:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.227 00:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.227 00:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.227 00:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.227 00:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.485 00:16:32.485 00:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.485 00:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.485 00:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.742 00:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.742 00:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.742 00:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.742 00:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.742 00:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.742 00:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.742 { 00:16:32.742 "cntlid": 101, 00:16:32.742 "qid": 0, 00:16:32.742 "state": "enabled", 00:16:32.742 "thread": "nvmf_tgt_poll_group_000", 00:16:32.742 "listen_address": { 00:16:32.742 "trtype": "TCP", 00:16:32.742 "adrfam": "IPv4", 00:16:32.742 "traddr": "10.0.0.2", 00:16:32.742 "trsvcid": "4420" 00:16:32.742 }, 00:16:32.742 "peer_address": { 00:16:32.742 "trtype": "TCP", 00:16:32.742 "adrfam": "IPv4", 00:16:32.742 "traddr": "10.0.0.1", 00:16:32.742 "trsvcid": "58282" 00:16:32.742 }, 00:16:32.742 "auth": 
{ 00:16:32.742 "state": "completed", 00:16:32.742 "digest": "sha512", 00:16:32.742 "dhgroup": "null" 00:16:32.742 } 00:16:32.742 } 00:16:32.742 ]' 00:16:32.742 00:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.999 00:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.999 00:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.999 00:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:32.999 00:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.999 00:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.999 00:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.999 00:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.257 00:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzAwOGVjOTIzZWZiNmZjYTZlYjVkMzY4OTdkOGY1NDczMDQ2YTU0NjU3NmJiZDI5woUgqQ==: --dhchap-ctrl-secret DHHC-1:01:MGM4MTlhZmRlODQ1NTY5MDhhOTFhMTg0Yjg3Nzk4NDHrycQE: 00:16:34.191 00:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.191 00:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.191 00:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.191 00:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.191 00:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.191 00:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.191 00:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:34.191 00:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:34.449 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:16:34.449 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.449 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:34.449 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:34.449 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:34.449 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.449 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:34.449 00:53:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.449 00:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.449 00:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.449 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.449 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.706 00:16:34.706 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.706 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.706 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.964 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.964 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.964 00:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.964 00:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.964 00:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.964 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.964 { 00:16:34.964 "cntlid": 103, 00:16:34.964 "qid": 0, 00:16:34.964 "state": "enabled", 00:16:34.964 "thread": "nvmf_tgt_poll_group_000", 00:16:34.964 "listen_address": { 00:16:34.965 "trtype": "TCP", 00:16:34.965 "adrfam": "IPv4", 00:16:34.965 "traddr": "10.0.0.2", 00:16:34.965 "trsvcid": "4420" 00:16:34.965 }, 00:16:34.965 "peer_address": { 00:16:34.965 "trtype": "TCP", 00:16:34.965 "adrfam": "IPv4", 00:16:34.965 "traddr": "10.0.0.1", 00:16:34.965 "trsvcid": "58320" 00:16:34.965 }, 00:16:34.965 "auth": { 00:16:34.965 "state": "completed", 00:16:34.965 "digest": "sha512", 00:16:34.965 "dhgroup": "null" 00:16:34.965 } 00:16:34.965 } 00:16:34.965 ]' 00:16:34.965 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.965 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.965 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.965 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:34.965 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.965 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.965 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.965 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.224 00:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTA3MWEzM2JkNTY4ZGI2YzhlNTBhMjg0NzUxMjM3MmQ5NjYyMzgzZTBlZGI5Y2YzYTBjMDQyZDMyNWViNjFkZlZNQEs=: 00:16:36.160 00:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.160 00:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.160 00:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.160 00:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.160 00:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.160 00:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.160 00:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.160 00:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:36.160 00:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:36.729 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:16:36.729 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.729 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:36.729 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:36.729 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:36.729 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.729 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.729 00:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.729 00:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.729 00:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.729 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.729 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.987 00:16:36.987 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.987 00:53:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.987 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.245 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.245 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.245 00:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.245 00:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.245 00:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.245 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.245 { 00:16:37.245 "cntlid": 105, 00:16:37.245 "qid": 0, 00:16:37.245 "state": "enabled", 00:16:37.245 "thread": "nvmf_tgt_poll_group_000", 00:16:37.245 "listen_address": { 00:16:37.245 "trtype": "TCP", 00:16:37.245 "adrfam": "IPv4", 00:16:37.245 "traddr": "10.0.0.2", 00:16:37.245 "trsvcid": "4420" 00:16:37.245 }, 00:16:37.245 "peer_address": { 00:16:37.245 "trtype": "TCP", 00:16:37.245 "adrfam": "IPv4", 00:16:37.245 "traddr": "10.0.0.1", 00:16:37.245 "trsvcid": "37940" 00:16:37.245 }, 00:16:37.245 "auth": { 00:16:37.245 "state": "completed", 00:16:37.245 "digest": "sha512", 00:16:37.245 "dhgroup": "ffdhe2048" 00:16:37.245 } 00:16:37.245 } 00:16:37.245 ]' 00:16:37.245 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.245 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.245 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.245 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:37.245 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.245 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.245 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.245 00:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.503 00:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmEyZmUxNjYyNGQ1MjdiZjQ3ZDVlMDEyZWI5ZGYzYjgwYjg2MzU1OWU1YjAxZGU3X1GPxA==: --dhchap-ctrl-secret DHHC-1:03:MTljYzkxMTI1Zjc2ODcxYmQ1MjEzYjcyZTRkN2E5ZGRmMTczMjZlMTk4M2NlNDg1NDNjYmQ1ODdlZjRhMWZiONH/NNE=: 00:16:38.435 00:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.435 00:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.435 00:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.435 00:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:38.692 00:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.692 00:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.692 00:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:38.692 00:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:38.949 00:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:16:38.949 00:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.949 00:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:38.949 00:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:38.949 00:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:38.949 00:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.949 00:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.949 00:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.949 00:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.949 00:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.949 00:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.949 00:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.206 00:16:39.206 00:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.206 00:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.206 00:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.464 00:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.464 00:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.464 00:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.464 00:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.464 00:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.464 00:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.464 { 00:16:39.464 "cntlid": 107, 00:16:39.464 "qid": 0, 00:16:39.464 "state": "enabled", 00:16:39.464 "thread": 
"nvmf_tgt_poll_group_000", 00:16:39.464 "listen_address": { 00:16:39.464 "trtype": "TCP", 00:16:39.464 "adrfam": "IPv4", 00:16:39.464 "traddr": "10.0.0.2", 00:16:39.464 "trsvcid": "4420" 00:16:39.464 }, 00:16:39.464 "peer_address": { 00:16:39.464 "trtype": "TCP", 00:16:39.464 "adrfam": "IPv4", 00:16:39.464 "traddr": "10.0.0.1", 00:16:39.464 "trsvcid": "37964" 00:16:39.464 }, 00:16:39.464 "auth": { 00:16:39.464 "state": "completed", 00:16:39.464 "digest": "sha512", 00:16:39.464 "dhgroup": "ffdhe2048" 00:16:39.464 } 00:16:39.464 } 00:16:39.464 ]' 00:16:39.464 00:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.464 00:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.464 00:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.464 00:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:39.464 00:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.464 00:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.464 00:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.464 00:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.722 00:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDIzNWNmZGNkODBmOGM5MTM1NWEwNTc4NjIyNDA5NTmxCvf8: --dhchap-ctrl-secret DHHC-1:02:M2QxOWYzMDg1ZGExNDczYjRiMTYyZjMxMzNmZGE1OTQyMjk3OWE4OWEwNzhiNjdhGmrOGg==: 00:16:41.110 00:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.110 00:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.110 00:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.110 00:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.110 00:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.110 00:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.110 00:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:41.111 00:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:41.111 00:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:41.111 00:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.111 00:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:41.111 00:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:41.111 00:53:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:41.111 00:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.111 00:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.111 00:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.111 00:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.111 00:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.111 00:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.111 00:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.367 00:16:41.367 00:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.367 00:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.367 00:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.624 00:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.624 00:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.624 00:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.624 00:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.624 00:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.624 00:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.624 { 00:16:41.624 "cntlid": 109, 00:16:41.624 "qid": 0, 00:16:41.624 "state": "enabled", 00:16:41.624 "thread": "nvmf_tgt_poll_group_000", 00:16:41.624 "listen_address": { 00:16:41.624 "trtype": "TCP", 00:16:41.624 "adrfam": "IPv4", 00:16:41.624 "traddr": "10.0.0.2", 00:16:41.624 "trsvcid": "4420" 00:16:41.624 }, 00:16:41.624 "peer_address": { 00:16:41.624 "trtype": "TCP", 00:16:41.624 "adrfam": "IPv4", 00:16:41.624 "traddr": "10.0.0.1", 00:16:41.624 "trsvcid": "37980" 00:16:41.624 }, 00:16:41.624 "auth": { 00:16:41.624 "state": "completed", 00:16:41.624 "digest": "sha512", 00:16:41.624 "dhgroup": "ffdhe2048" 00:16:41.624 } 00:16:41.624 } 00:16:41.624 ]' 00:16:41.624 00:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.880 00:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.880 00:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.880 00:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:41.880 00:53:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.880 00:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.880 00:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.880 00:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.136 00:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzAwOGVjOTIzZWZiNmZjYTZlYjVkMzY4OTdkOGY1NDczMDQ2YTU0NjU3NmJiZDI5woUgqQ==: --dhchap-ctrl-secret DHHC-1:01:MGM4MTlhZmRlODQ1NTY5MDhhOTFhMTg0Yjg3Nzk4NDHrycQE: 00:16:43.098 00:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.098 00:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.098 00:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.098 00:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.098 00:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.098 00:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.098 00:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:43.098 00:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:43.356 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:43.356 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.356 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:43.356 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:43.356 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:43.356 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.356 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:43.356 00:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.356 00:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.356 00:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.356 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.356 00:53:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.612 00:16:43.612 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.612 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.612 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.868 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.868 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.868 00:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.868 00:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.868 00:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.868 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.868 { 00:16:43.868 "cntlid": 111, 00:16:43.868 "qid": 0, 00:16:43.868 "state": "enabled", 00:16:43.868 "thread": "nvmf_tgt_poll_group_000", 00:16:43.868 "listen_address": { 00:16:43.868 "trtype": "TCP", 00:16:43.868 "adrfam": "IPv4", 00:16:43.868 "traddr": "10.0.0.2", 00:16:43.868 "trsvcid": "4420" 00:16:43.868 }, 00:16:43.868 "peer_address": { 00:16:43.868 "trtype": "TCP", 00:16:43.868 "adrfam": "IPv4", 00:16:43.868 "traddr": "10.0.0.1", 00:16:43.868 "trsvcid": "38018" 00:16:43.868 }, 00:16:43.868 "auth": { 00:16:43.868 "state": "completed", 00:16:43.868 "digest": "sha512", 00:16:43.868 "dhgroup": "ffdhe2048" 00:16:43.868 } 00:16:43.868 } 00:16:43.868 ]' 00:16:43.868 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.124 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.124 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.124 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:44.124 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.124 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.124 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.124 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.380 00:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTA3MWEzM2JkNTY4ZGI2YzhlNTBhMjg0NzUxMjM3MmQ5NjYyMzgzZTBlZGI5Y2YzYTBjMDQyZDMyNWViNjFkZlZNQEs=: 00:16:45.311 00:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.311 00:53:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.311 00:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.311 00:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.311 00:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.311 00:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.311 00:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.311 00:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:45.311 00:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:45.567 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:45.567 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.567 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:45.567 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:45.567 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:45.567 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.567 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.567 00:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.567 00:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.567 00:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.567 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.567 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.823 00:16:45.823 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.823 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.823 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.080 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.080 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.080 00:53:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.080 00:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.080 00:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.080 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.080 { 00:16:46.080 "cntlid": 113, 00:16:46.080 "qid": 0, 00:16:46.080 "state": "enabled", 00:16:46.080 "thread": "nvmf_tgt_poll_group_000", 00:16:46.080 "listen_address": { 00:16:46.080 "trtype": "TCP", 00:16:46.080 "adrfam": "IPv4", 00:16:46.080 "traddr": "10.0.0.2", 00:16:46.080 "trsvcid": "4420" 00:16:46.080 }, 00:16:46.080 "peer_address": { 00:16:46.080 "trtype": "TCP", 00:16:46.080 "adrfam": "IPv4", 00:16:46.080 "traddr": "10.0.0.1", 00:16:46.080 "trsvcid": "38056" 00:16:46.080 }, 00:16:46.080 "auth": { 00:16:46.080 "state": "completed", 00:16:46.080 "digest": "sha512", 00:16:46.080 "dhgroup": "ffdhe3072" 00:16:46.080 } 00:16:46.080 } 00:16:46.080 ]' 00:16:46.080 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.080 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.080 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.338 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:46.338 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.338 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.338 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.338 00:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.595 00:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmEyZmUxNjYyNGQ1MjdiZjQ3ZDVlMDEyZWI5ZGYzYjgwYjg2MzU1OWU1YjAxZGU3X1GPxA==: --dhchap-ctrl-secret DHHC-1:03:MTljYzkxMTI1Zjc2ODcxYmQ1MjEzYjcyZTRkN2E5ZGRmMTczMjZlMTk4M2NlNDg1NDNjYmQ1ODdlZjRhMWZiONH/NNE=: 00:16:47.529 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.529 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.529 00:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.529 00:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.529 00:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.529 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.529 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:47.529 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:47.788 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:47.788 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.788 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:47.788 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:47.788 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:47.788 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.788 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.788 00:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.788 00:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.788 00:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.788 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.788 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.046 00:16:48.046 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.046 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.046 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.304 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.304 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.304 00:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.304 00:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.304 00:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.304 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.304 { 00:16:48.304 "cntlid": 115, 00:16:48.304 "qid": 0, 00:16:48.304 "state": "enabled", 00:16:48.304 "thread": "nvmf_tgt_poll_group_000", 00:16:48.304 "listen_address": { 00:16:48.304 "trtype": "TCP", 00:16:48.304 "adrfam": "IPv4", 00:16:48.304 "traddr": "10.0.0.2", 00:16:48.304 "trsvcid": "4420" 00:16:48.304 }, 00:16:48.304 "peer_address": { 00:16:48.304 "trtype": "TCP", 00:16:48.304 "adrfam": "IPv4", 00:16:48.304 "traddr": "10.0.0.1", 00:16:48.304 "trsvcid": "52020" 00:16:48.304 }, 00:16:48.304 "auth": { 00:16:48.304 "state": "completed", 00:16:48.304 "digest": "sha512", 00:16:48.304 "dhgroup": "ffdhe3072" 00:16:48.304 } 00:16:48.304 } 
00:16:48.304 ]' 00:16:48.304 00:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.304 00:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.304 00:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.304 00:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:48.304 00:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.563 00:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.563 00:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.563 00:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.823 00:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDIzNWNmZGNkODBmOGM5MTM1NWEwNTc4NjIyNDA5NTmxCvf8: --dhchap-ctrl-secret DHHC-1:02:M2QxOWYzMDg1ZGExNDczYjRiMTYyZjMxMzNmZGE1OTQyMjk3OWE4OWEwNzhiNjdhGmrOGg==: 00:16:49.756 00:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.756 00:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.756 00:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.756 00:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.756 00:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.756 00:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.756 00:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:49.756 00:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:50.013 00:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:50.013 00:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.013 00:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:50.013 00:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:50.013 00:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:50.013 00:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.013 00:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.013 00:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.013 00:53:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.013 00:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.013 00:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.013 00:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.271 00:16:50.271 00:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.271 00:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.271 00:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.535 00:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.535 00:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.535 00:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.535 00:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.535 00:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.535 00:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.535 { 00:16:50.535 "cntlid": 117, 00:16:50.535 "qid": 0, 00:16:50.535 "state": "enabled", 00:16:50.535 "thread": "nvmf_tgt_poll_group_000", 00:16:50.535 "listen_address": { 00:16:50.535 "trtype": "TCP", 00:16:50.535 "adrfam": "IPv4", 00:16:50.535 "traddr": "10.0.0.2", 00:16:50.535 "trsvcid": "4420" 00:16:50.535 }, 00:16:50.535 "peer_address": { 00:16:50.535 "trtype": "TCP", 00:16:50.535 "adrfam": "IPv4", 00:16:50.535 "traddr": "10.0.0.1", 00:16:50.535 "trsvcid": "52052" 00:16:50.535 }, 00:16:50.535 "auth": { 00:16:50.535 "state": "completed", 00:16:50.535 "digest": "sha512", 00:16:50.535 "dhgroup": "ffdhe3072" 00:16:50.535 } 00:16:50.535 } 00:16:50.535 ]' 00:16:50.535 00:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.535 00:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.535 00:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.535 00:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:50.535 00:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.795 00:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.795 00:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.795 00:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.053 00:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzAwOGVjOTIzZWZiNmZjYTZlYjVkMzY4OTdkOGY1NDczMDQ2YTU0NjU3NmJiZDI5woUgqQ==: --dhchap-ctrl-secret DHHC-1:01:MGM4MTlhZmRlODQ1NTY5MDhhOTFhMTg0Yjg3Nzk4NDHrycQE: 00:16:51.983 00:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.983 00:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.983 00:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.983 00:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.983 00:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.983 00:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.983 00:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:51.983 00:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:52.240 00:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:52.240 00:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.240 00:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:52.240 00:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:52.240 00:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:52.240 00:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.240 00:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:52.240 00:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.240 00:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.240 00:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.240 00:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.240 00:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.498 00:16:52.498 00:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.498 00:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.498 00:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.756 00:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.756 00:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.756 00:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.756 00:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.756 00:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.756 00:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.756 { 00:16:52.756 "cntlid": 119, 00:16:52.756 "qid": 0, 00:16:52.756 "state": "enabled", 00:16:52.756 "thread": "nvmf_tgt_poll_group_000", 00:16:52.756 "listen_address": { 00:16:52.756 "trtype": "TCP", 00:16:52.756 "adrfam": "IPv4", 00:16:52.756 "traddr": "10.0.0.2", 00:16:52.756 "trsvcid": "4420" 00:16:52.756 }, 00:16:52.756 "peer_address": { 00:16:52.756 "trtype": "TCP", 00:16:52.756 "adrfam": "IPv4", 00:16:52.756 "traddr": "10.0.0.1", 00:16:52.756 "trsvcid": "52086" 00:16:52.756 }, 00:16:52.756 "auth": { 00:16:52.756 "state": "completed", 00:16:52.756 "digest": "sha512", 00:16:52.756 "dhgroup": "ffdhe3072" 00:16:52.756 } 00:16:52.756 } 00:16:52.756 ]' 00:16:52.756 00:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.013 00:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.013 00:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.013 00:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:53.013 00:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.013 00:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.013 00:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.014 00:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.271 00:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTA3MWEzM2JkNTY4ZGI2YzhlNTBhMjg0NzUxMjM3MmQ5NjYyMzgzZTBlZGI5Y2YzYTBjMDQyZDMyNWViNjFkZlZNQEs=: 00:16:54.204 00:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.204 00:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.204 00:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.204 00:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.204 00:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.204 00:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.204 00:53:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.204 00:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:54.204 00:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:54.474 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:54.474 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.474 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:54.474 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:54.474 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:54.474 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.474 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.474 00:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.474 00:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.474 00:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.474 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.474 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.732 00:16:54.732 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.732 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.732 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.990 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.990 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.990 00:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.990 00:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.990 00:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.990 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.990 { 00:16:54.990 "cntlid": 121, 00:16:54.990 "qid": 0, 00:16:54.990 "state": "enabled", 00:16:54.990 "thread": "nvmf_tgt_poll_group_000", 00:16:54.990 "listen_address": { 00:16:54.990 "trtype": "TCP", 00:16:54.990 "adrfam": "IPv4", 
00:16:54.990 "traddr": "10.0.0.2", 00:16:54.990 "trsvcid": "4420" 00:16:54.990 }, 00:16:54.990 "peer_address": { 00:16:54.990 "trtype": "TCP", 00:16:54.990 "adrfam": "IPv4", 00:16:54.990 "traddr": "10.0.0.1", 00:16:54.990 "trsvcid": "52118" 00:16:54.990 }, 00:16:54.990 "auth": { 00:16:54.990 "state": "completed", 00:16:54.990 "digest": "sha512", 00:16:54.990 "dhgroup": "ffdhe4096" 00:16:54.990 } 00:16:54.990 } 00:16:54.990 ]' 00:16:54.990 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.990 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.990 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.249 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:55.249 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.249 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.249 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.249 00:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.507 00:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmEyZmUxNjYyNGQ1MjdiZjQ3ZDVlMDEyZWI5ZGYzYjgwYjg2MzU1OWU1YjAxZGU3X1GPxA==: --dhchap-ctrl-secret DHHC-1:03:MTljYzkxMTI1Zjc2ODcxYmQ1MjEzYjcyZTRkN2E5ZGRmMTczMjZlMTk4M2NlNDg1NDNjYmQ1ODdlZjRhMWZiONH/NNE=: 00:16:56.440 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.440 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.440 00:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.440 00:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.440 00:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.440 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.440 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:56.440 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:56.698 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:56.698 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.698 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:56.698 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:56.698 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:56.698 00:53:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.698 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.698 00:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.698 00:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.698 00:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.698 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.698 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.956 00:16:56.956 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.956 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.956 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.214 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.214 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.214 00:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.214 00:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.214 00:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.214 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.214 { 00:16:57.214 "cntlid": 123, 00:16:57.214 "qid": 0, 00:16:57.214 "state": "enabled", 00:16:57.214 "thread": "nvmf_tgt_poll_group_000", 00:16:57.214 "listen_address": { 00:16:57.214 "trtype": "TCP", 00:16:57.214 "adrfam": "IPv4", 00:16:57.214 "traddr": "10.0.0.2", 00:16:57.214 "trsvcid": "4420" 00:16:57.214 }, 00:16:57.214 "peer_address": { 00:16:57.214 "trtype": "TCP", 00:16:57.214 "adrfam": "IPv4", 00:16:57.214 "traddr": "10.0.0.1", 00:16:57.214 "trsvcid": "33334" 00:16:57.214 }, 00:16:57.214 "auth": { 00:16:57.214 "state": "completed", 00:16:57.214 "digest": "sha512", 00:16:57.214 "dhgroup": "ffdhe4096" 00:16:57.214 } 00:16:57.214 } 00:16:57.214 ]' 00:16:57.214 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.472 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.472 00:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.472 00:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:57.472 00:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.472 00:53:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.472 00:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.472 00:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.754 00:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDIzNWNmZGNkODBmOGM5MTM1NWEwNTc4NjIyNDA5NTmxCvf8: --dhchap-ctrl-secret DHHC-1:02:M2QxOWYzMDg1ZGExNDczYjRiMTYyZjMxMzNmZGE1OTQyMjk3OWE4OWEwNzhiNjdhGmrOGg==: 00:16:58.705 00:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.705 00:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.705 00:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.705 00:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.705 00:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.705 00:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.705 00:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:58.705 00:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:58.963 00:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:58.963 00:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.963 00:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:58.963 00:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:58.963 00:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:58.963 00:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.963 00:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.963 00:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.963 00:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.963 00:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.963 00:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.963 00:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.220 00:16:59.220 00:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.220 00:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.220 00:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.477 00:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.477 00:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.477 00:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.477 00:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.477 00:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.477 00:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.477 { 00:16:59.477 "cntlid": 125, 00:16:59.477 "qid": 0, 00:16:59.477 "state": "enabled", 00:16:59.477 "thread": "nvmf_tgt_poll_group_000", 00:16:59.477 "listen_address": { 00:16:59.477 "trtype": "TCP", 00:16:59.477 "adrfam": "IPv4", 00:16:59.477 "traddr": "10.0.0.2", 00:16:59.477 "trsvcid": "4420" 00:16:59.477 }, 00:16:59.477 "peer_address": { 00:16:59.477 "trtype": "TCP", 00:16:59.477 "adrfam": "IPv4", 00:16:59.477 "traddr": "10.0.0.1", 00:16:59.477 "trsvcid": "33356" 00:16:59.477 }, 00:16:59.477 "auth": { 00:16:59.477 "state": "completed", 00:16:59.477 "digest": "sha512", 00:16:59.477 "dhgroup": "ffdhe4096" 00:16:59.477 } 00:16:59.477 } 00:16:59.477 ]' 00:16:59.477 00:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.735 00:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.735 00:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.735 00:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:59.735 00:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.735 00:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.735 00:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.735 00:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.992 00:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzAwOGVjOTIzZWZiNmZjYTZlYjVkMzY4OTdkOGY1NDczMDQ2YTU0NjU3NmJiZDI5woUgqQ==: --dhchap-ctrl-secret DHHC-1:01:MGM4MTlhZmRlODQ1NTY5MDhhOTFhMTg0Yjg3Nzk4NDHrycQE: 00:17:00.945 00:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:00.945 00:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.946 00:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.946 00:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.946 00:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.946 00:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.946 00:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:00.946 00:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:01.204 00:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:01.204 00:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.204 00:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:01.204 00:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:01.204 00:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:01.204 00:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.204 00:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:01.204 00:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.204 00:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.204 00:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.204 00:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:01.204 00:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:01.772 00:17:01.772 00:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.772 00:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.772 00:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.030 00:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.030 00:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.030 00:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.030 00:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:02.030 00:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.030 00:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.030 { 00:17:02.030 "cntlid": 127, 00:17:02.030 "qid": 0, 00:17:02.030 "state": "enabled", 00:17:02.030 "thread": "nvmf_tgt_poll_group_000", 00:17:02.030 "listen_address": { 00:17:02.030 "trtype": "TCP", 00:17:02.030 "adrfam": "IPv4", 00:17:02.030 "traddr": "10.0.0.2", 00:17:02.030 "trsvcid": "4420" 00:17:02.030 }, 00:17:02.030 "peer_address": { 00:17:02.030 "trtype": "TCP", 00:17:02.030 "adrfam": "IPv4", 00:17:02.030 "traddr": "10.0.0.1", 00:17:02.030 "trsvcid": "33396" 00:17:02.030 }, 00:17:02.030 "auth": { 00:17:02.030 "state": "completed", 00:17:02.030 "digest": "sha512", 00:17:02.030 "dhgroup": "ffdhe4096" 00:17:02.030 } 00:17:02.030 } 00:17:02.030 ]' 00:17:02.030 00:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.030 00:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.030 00:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.030 00:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:02.030 00:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.030 00:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.030 00:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.030 00:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.288 00:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTA3MWEzM2JkNTY4ZGI2YzhlNTBhMjg0NzUxMjM3MmQ5NjYyMzgzZTBlZGI5Y2YzYTBjMDQyZDMyNWViNjFkZlZNQEs=: 00:17:03.225 00:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.225 00:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.225 00:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.225 00:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.225 00:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.225 00:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.225 00:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.225 00:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:03.225 00:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:03.483 00:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:17:03.483 00:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.483 00:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:03.483 00:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:03.483 00:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:03.483 00:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.483 00:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.483 00:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.483 00:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.483 00:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.483 00:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.483 00:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.048 00:17:04.048 00:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.048 00:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.048 00:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.306 00:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.306 00:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.306 00:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.306 00:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.306 00:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.306 00:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.306 { 00:17:04.306 "cntlid": 129, 00:17:04.306 "qid": 0, 00:17:04.306 "state": "enabled", 00:17:04.306 "thread": "nvmf_tgt_poll_group_000", 00:17:04.306 "listen_address": { 00:17:04.306 "trtype": "TCP", 00:17:04.306 "adrfam": "IPv4", 00:17:04.306 "traddr": "10.0.0.2", 00:17:04.306 "trsvcid": "4420" 00:17:04.306 }, 00:17:04.306 "peer_address": { 00:17:04.306 "trtype": "TCP", 00:17:04.306 "adrfam": "IPv4", 00:17:04.306 "traddr": "10.0.0.1", 00:17:04.306 "trsvcid": "33424" 00:17:04.306 }, 00:17:04.306 "auth": { 00:17:04.306 "state": "completed", 00:17:04.306 "digest": "sha512", 00:17:04.306 "dhgroup": "ffdhe6144" 00:17:04.306 } 00:17:04.306 } 00:17:04.306 ]' 00:17:04.306 00:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.306 00:53:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.306 00:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.306 00:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:04.306 00:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.565 00:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.565 00:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.565 00:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.823 00:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmEyZmUxNjYyNGQ1MjdiZjQ3ZDVlMDEyZWI5ZGYzYjgwYjg2MzU1OWU1YjAxZGU3X1GPxA==: --dhchap-ctrl-secret DHHC-1:03:MTljYzkxMTI1Zjc2ODcxYmQ1MjEzYjcyZTRkN2E5ZGRmMTczMjZlMTk4M2NlNDg1NDNjYmQ1ODdlZjRhMWZiONH/NNE=: 00:17:05.754 00:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.754 00:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:05.754 00:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.754 00:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.754 00:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.754 00:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.754 00:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.754 00:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:06.012 00:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:06.012 00:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.012 00:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:06.012 00:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:06.012 00:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:06.012 00:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.012 00:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.012 00:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.012 00:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.012 00:53:40 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.012 00:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.012 00:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.577 00:17:06.577 00:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.577 00:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.577 00:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.834 00:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.834 00:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.834 00:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.834 00:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.834 00:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.834 00:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.834 { 00:17:06.834 "cntlid": 131, 00:17:06.834 "qid": 0, 00:17:06.834 "state": "enabled", 00:17:06.834 "thread": "nvmf_tgt_poll_group_000", 00:17:06.834 "listen_address": { 00:17:06.834 "trtype": "TCP", 00:17:06.834 "adrfam": "IPv4", 00:17:06.834 "traddr": "10.0.0.2", 00:17:06.834 "trsvcid": "4420" 00:17:06.834 }, 00:17:06.834 "peer_address": { 00:17:06.834 "trtype": "TCP", 00:17:06.834 "adrfam": "IPv4", 00:17:06.834 "traddr": "10.0.0.1", 00:17:06.834 "trsvcid": "33446" 00:17:06.834 }, 00:17:06.834 "auth": { 00:17:06.834 "state": "completed", 00:17:06.834 "digest": "sha512", 00:17:06.834 "dhgroup": "ffdhe6144" 00:17:06.834 } 00:17:06.834 } 00:17:06.834 ]' 00:17:06.834 00:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.834 00:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.834 00:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.834 00:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:06.834 00:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.834 00:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.834 00:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.834 00:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.091 00:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDIzNWNmZGNkODBmOGM5MTM1NWEwNTc4NjIyNDA5NTmxCvf8: --dhchap-ctrl-secret DHHC-1:02:M2QxOWYzMDg1ZGExNDczYjRiMTYyZjMxMzNmZGE1OTQyMjk3OWE4OWEwNzhiNjdhGmrOGg==: 00:17:08.026 00:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.026 00:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.026 00:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.026 00:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.026 00:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.026 00:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.026 00:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:08.026 00:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:08.285 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:08.285 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.285 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:08.285 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:08.285 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:08.285 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.285 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.285 00:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.285 00:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.543 00:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.543 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.543 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.110 00:17:09.110 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.110 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.110 00:53:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.110 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.110 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.110 00:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.110 00:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.110 00:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.110 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.110 { 00:17:09.110 "cntlid": 133, 00:17:09.110 "qid": 0, 00:17:09.110 "state": "enabled", 00:17:09.110 "thread": "nvmf_tgt_poll_group_000", 00:17:09.110 "listen_address": { 00:17:09.110 "trtype": "TCP", 00:17:09.110 "adrfam": "IPv4", 00:17:09.110 "traddr": "10.0.0.2", 00:17:09.110 "trsvcid": "4420" 00:17:09.111 }, 00:17:09.111 "peer_address": { 00:17:09.111 "trtype": "TCP", 00:17:09.111 "adrfam": "IPv4", 00:17:09.111 "traddr": "10.0.0.1", 00:17:09.111 "trsvcid": "52038" 00:17:09.111 }, 00:17:09.111 "auth": { 00:17:09.111 "state": "completed", 00:17:09.111 "digest": "sha512", 00:17:09.111 "dhgroup": "ffdhe6144" 00:17:09.111 } 00:17:09.111 } 00:17:09.111 ]' 00:17:09.111 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.368 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.368 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.368 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:09.368 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.368 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.368 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.368 00:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.625 00:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzAwOGVjOTIzZWZiNmZjYTZlYjVkMzY4OTdkOGY1NDczMDQ2YTU0NjU3NmJiZDI5woUgqQ==: --dhchap-ctrl-secret DHHC-1:01:MGM4MTlhZmRlODQ1NTY5MDhhOTFhMTg0Yjg3Nzk4NDHrycQE: 00:17:10.560 00:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.560 00:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.560 00:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.560 00:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.560 00:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.560 00:53:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.560 00:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:10.560 00:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:10.818 00:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:10.818 00:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.818 00:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:10.818 00:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:10.818 00:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:10.818 00:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.818 00:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:10.818 00:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.818 00:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.818 00:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.818 00:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.818 00:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.386 00:17:11.386 00:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.386 00:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.386 00:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.645 00:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.645 00:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.645 00:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.645 00:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.645 00:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.645 00:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.645 { 00:17:11.645 "cntlid": 135, 00:17:11.645 "qid": 0, 00:17:11.645 "state": "enabled", 00:17:11.645 "thread": "nvmf_tgt_poll_group_000", 00:17:11.645 "listen_address": { 00:17:11.645 "trtype": "TCP", 00:17:11.645 "adrfam": "IPv4", 00:17:11.645 "traddr": "10.0.0.2", 00:17:11.645 "trsvcid": "4420" 00:17:11.645 }, 
00:17:11.645 "peer_address": { 00:17:11.645 "trtype": "TCP", 00:17:11.645 "adrfam": "IPv4", 00:17:11.645 "traddr": "10.0.0.1", 00:17:11.645 "trsvcid": "52062" 00:17:11.645 }, 00:17:11.645 "auth": { 00:17:11.645 "state": "completed", 00:17:11.645 "digest": "sha512", 00:17:11.645 "dhgroup": "ffdhe6144" 00:17:11.645 } 00:17:11.645 } 00:17:11.645 ]' 00:17:11.645 00:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.645 00:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.645 00:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.645 00:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:11.646 00:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.646 00:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.646 00:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.646 00:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.904 00:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTA3MWEzM2JkNTY4ZGI2YzhlNTBhMjg0NzUxMjM3MmQ5NjYyMzgzZTBlZGI5Y2YzYTBjMDQyZDMyNWViNjFkZlZNQEs=: 00:17:12.904 00:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.904 00:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:12.904 00:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.904 00:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.904 00:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.904 00:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.904 00:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.904 00:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:12.904 00:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:13.162 00:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:13.162 00:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.162 00:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:13.162 00:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:13.162 00:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:13.162 00:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:13.162 00:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.162 00:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.162 00:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.162 00:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.162 00:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.163 00:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.099 00:17:14.099 00:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.099 00:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.099 00:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.357 00:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.357 00:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.357 00:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.357 00:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.357 00:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.357 00:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.357 { 00:17:14.357 "cntlid": 137, 00:17:14.357 "qid": 0, 00:17:14.357 "state": "enabled", 00:17:14.357 "thread": "nvmf_tgt_poll_group_000", 00:17:14.357 "listen_address": { 00:17:14.357 "trtype": "TCP", 00:17:14.357 "adrfam": "IPv4", 00:17:14.357 "traddr": "10.0.0.2", 00:17:14.357 "trsvcid": "4420" 00:17:14.357 }, 00:17:14.357 "peer_address": { 00:17:14.357 "trtype": "TCP", 00:17:14.357 "adrfam": "IPv4", 00:17:14.357 "traddr": "10.0.0.1", 00:17:14.357 "trsvcid": "52098" 00:17:14.357 }, 00:17:14.357 "auth": { 00:17:14.357 "state": "completed", 00:17:14.357 "digest": "sha512", 00:17:14.357 "dhgroup": "ffdhe8192" 00:17:14.357 } 00:17:14.357 } 00:17:14.357 ]' 00:17:14.357 00:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.357 00:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.357 00:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.615 00:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:14.616 00:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.616 00:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.616 00:53:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.616 00:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.874 00:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmEyZmUxNjYyNGQ1MjdiZjQ3ZDVlMDEyZWI5ZGYzYjgwYjg2MzU1OWU1YjAxZGU3X1GPxA==: --dhchap-ctrl-secret DHHC-1:03:MTljYzkxMTI1Zjc2ODcxYmQ1MjEzYjcyZTRkN2E5ZGRmMTczMjZlMTk4M2NlNDg1NDNjYmQ1ODdlZjRhMWZiONH/NNE=: 00:17:15.807 00:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.808 00:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:15.808 00:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.808 00:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.808 00:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.808 00:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.808 00:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:15.808 00:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:16.065 00:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:16.065 00:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.065 00:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:16.065 00:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:16.065 00:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:16.065 00:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.065 00:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.065 00:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.065 00:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.065 00:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.065 00:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.065 00:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.001 00:17:17.001 00:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.001 00:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.001 00:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.001 00:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.001 00:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.001 00:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.001 00:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.001 00:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.001 00:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.001 { 00:17:17.001 "cntlid": 139, 00:17:17.001 "qid": 0, 00:17:17.001 "state": "enabled", 00:17:17.001 "thread": "nvmf_tgt_poll_group_000", 00:17:17.001 "listen_address": { 00:17:17.001 "trtype": "TCP", 00:17:17.001 "adrfam": "IPv4", 00:17:17.001 "traddr": "10.0.0.2", 00:17:17.001 "trsvcid": "4420" 00:17:17.001 }, 00:17:17.001 "peer_address": { 00:17:17.001 "trtype": "TCP", 00:17:17.001 "adrfam": "IPv4", 00:17:17.001 "traddr": "10.0.0.1", 00:17:17.001 "trsvcid": "52138" 00:17:17.001 }, 00:17:17.001 "auth": { 00:17:17.001 "state": "completed", 00:17:17.001 "digest": "sha512", 00:17:17.001 "dhgroup": "ffdhe8192" 00:17:17.001 } 00:17:17.001 } 00:17:17.001 ]' 00:17:17.001 00:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.259 00:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.259 00:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.259 00:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:17.259 00:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:17.259 00:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.259 00:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.259 00:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.518 00:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDIzNWNmZGNkODBmOGM5MTM1NWEwNTc4NjIyNDA5NTmxCvf8: --dhchap-ctrl-secret DHHC-1:02:M2QxOWYzMDg1ZGExNDczYjRiMTYyZjMxMzNmZGE1OTQyMjk3OWE4OWEwNzhiNjdhGmrOGg==: 00:17:18.453 00:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.453 00:53:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:18.453 00:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.453 00:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.453 00:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.453 00:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.453 00:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:18.453 00:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:18.722 00:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:18.722 00:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.722 00:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:18.722 00:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:18.722 00:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:18.722 00:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.722 00:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.722 00:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.722 00:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.722 00:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.722 00:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.722 00:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.664 00:17:19.664 00:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.664 00:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.664 00:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.921 00:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.921 00:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.921 00:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.921 00:53:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:19.921 00:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.921 00:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.921 { 00:17:19.921 "cntlid": 141, 00:17:19.921 "qid": 0, 00:17:19.921 "state": "enabled", 00:17:19.921 "thread": "nvmf_tgt_poll_group_000", 00:17:19.921 "listen_address": { 00:17:19.921 "trtype": "TCP", 00:17:19.921 "adrfam": "IPv4", 00:17:19.921 "traddr": "10.0.0.2", 00:17:19.921 "trsvcid": "4420" 00:17:19.921 }, 00:17:19.921 "peer_address": { 00:17:19.921 "trtype": "TCP", 00:17:19.921 "adrfam": "IPv4", 00:17:19.922 "traddr": "10.0.0.1", 00:17:19.922 "trsvcid": "36078" 00:17:19.922 }, 00:17:19.922 "auth": { 00:17:19.922 "state": "completed", 00:17:19.922 "digest": "sha512", 00:17:19.922 "dhgroup": "ffdhe8192" 00:17:19.922 } 00:17:19.922 } 00:17:19.922 ]' 00:17:19.922 00:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.922 00:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.922 00:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.922 00:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.922 00:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.180 00:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.180 00:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.180 00:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.180 00:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzAwOGVjOTIzZWZiNmZjYTZlYjVkMzY4OTdkOGY1NDczMDQ2YTU0NjU3NmJiZDI5woUgqQ==: --dhchap-ctrl-secret DHHC-1:01:MGM4MTlhZmRlODQ1NTY5MDhhOTFhMTg0Yjg3Nzk4NDHrycQE: 00:17:21.558 00:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.558 00:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:21.558 00:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.558 00:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.558 00:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.558 00:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.558 00:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:21.558 00:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:21.558 00:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:17:21.558 00:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.558 00:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:21.558 00:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:21.558 00:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:21.558 00:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.558 00:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:21.558 00:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.558 00:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.558 00:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.558 00:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.558 00:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.498 00:17:22.498 00:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.498 00:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.498 00:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.756 00:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.756 00:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.756 00:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.756 00:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.756 00:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.756 00:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.756 { 00:17:22.756 "cntlid": 143, 00:17:22.756 "qid": 0, 00:17:22.756 "state": "enabled", 00:17:22.756 "thread": "nvmf_tgt_poll_group_000", 00:17:22.756 "listen_address": { 00:17:22.756 "trtype": "TCP", 00:17:22.756 "adrfam": "IPv4", 00:17:22.756 "traddr": "10.0.0.2", 00:17:22.756 "trsvcid": "4420" 00:17:22.756 }, 00:17:22.756 "peer_address": { 00:17:22.756 "trtype": "TCP", 00:17:22.756 "adrfam": "IPv4", 00:17:22.756 "traddr": "10.0.0.1", 00:17:22.756 "trsvcid": "36098" 00:17:22.756 }, 00:17:22.756 "auth": { 00:17:22.756 "state": "completed", 00:17:22.756 "digest": "sha512", 00:17:22.756 "dhgroup": "ffdhe8192" 00:17:22.756 } 00:17:22.756 } 00:17:22.756 ]' 00:17:22.756 00:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.756 00:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.756 
00:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.756 00:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:22.756 00:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.756 00:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.756 00:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.756 00:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.016 00:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTA3MWEzM2JkNTY4ZGI2YzhlNTBhMjg0NzUxMjM3MmQ5NjYyMzgzZTBlZGI5Y2YzYTBjMDQyZDMyNWViNjFkZlZNQEs=: 00:17:24.403 00:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.403 00:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.403 00:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.403 00:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.403 00:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.403 00:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:24.403 00:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:24.403 00:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:24.403 00:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:24.403 00:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:24.403 00:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:24.403 00:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:24.403 00:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.403 00:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:24.403 00:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:24.403 00:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:24.403 00:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.403 00:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:24.403 00:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.403 00:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.403 00:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.403 00:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.403 00:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.338 00:17:25.338 00:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.338 00:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.338 00:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.597 00:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.597 00:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.597 00:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.597 00:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.597 00:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.597 00:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.597 { 00:17:25.597 "cntlid": 145, 00:17:25.597 "qid": 0, 00:17:25.597 "state": "enabled", 00:17:25.597 "thread": "nvmf_tgt_poll_group_000", 00:17:25.597 "listen_address": { 00:17:25.597 "trtype": "TCP", 00:17:25.597 "adrfam": "IPv4", 00:17:25.597 "traddr": "10.0.0.2", 00:17:25.597 "trsvcid": "4420" 00:17:25.597 }, 00:17:25.597 "peer_address": { 00:17:25.597 "trtype": "TCP", 00:17:25.597 "adrfam": "IPv4", 00:17:25.597 "traddr": "10.0.0.1", 00:17:25.597 "trsvcid": "36136" 00:17:25.597 }, 00:17:25.597 "auth": { 00:17:25.597 "state": "completed", 00:17:25.597 "digest": "sha512", 00:17:25.597 "dhgroup": "ffdhe8192" 00:17:25.597 } 00:17:25.597 } 00:17:25.597 ]' 00:17:25.597 00:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.597 00:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.597 00:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.597 00:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:25.597 00:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.597 00:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.597 00:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.597 00:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.163 00:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmEyZmUxNjYyNGQ1MjdiZjQ3ZDVlMDEyZWI5ZGYzYjgwYjg2MzU1OWU1YjAxZGU3X1GPxA==: --dhchap-ctrl-secret DHHC-1:03:MTljYzkxMTI1Zjc2ODcxYmQ1MjEzYjcyZTRkN2E5ZGRmMTczMjZlMTk4M2NlNDg1NDNjYmQ1ODdlZjRhMWZiONH/NNE=: 00:17:27.096 00:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.096 00:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.096 00:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.096 00:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.096 00:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.096 00:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:27.096 00:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.096 00:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.096 00:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.096 00:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:27.096 00:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:27.096 00:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:27.096 00:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:27.096 00:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.096 00:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:27.096 00:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.096 00:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:27.096 00:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:17:28.030 request: 00:17:28.030 { 00:17:28.030 "name": "nvme0", 00:17:28.030 "trtype": "tcp", 00:17:28.030 "traddr": "10.0.0.2", 00:17:28.030 "adrfam": "ipv4", 00:17:28.030 "trsvcid": "4420", 00:17:28.030 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:28.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:28.030 "prchk_reftag": false, 00:17:28.030 "prchk_guard": false, 00:17:28.030 "hdgst": false, 00:17:28.030 "ddgst": false, 00:17:28.030 "dhchap_key": "key2", 00:17:28.030 "method": "bdev_nvme_attach_controller", 00:17:28.030 "req_id": 1 00:17:28.030 } 00:17:28.030 Got JSON-RPC error response 00:17:28.030 response: 00:17:28.030 { 00:17:28.030 "code": -5, 00:17:28.030 "message": "Input/output error" 00:17:28.030 } 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:28.030 00:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:28.610 request: 00:17:28.610 { 00:17:28.610 "name": "nvme0", 00:17:28.610 "trtype": "tcp", 00:17:28.610 "traddr": "10.0.0.2", 00:17:28.610 "adrfam": "ipv4", 00:17:28.610 "trsvcid": "4420", 00:17:28.610 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:28.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:28.610 "prchk_reftag": false, 00:17:28.610 "prchk_guard": false, 00:17:28.610 "hdgst": false, 00:17:28.610 "ddgst": false, 00:17:28.610 "dhchap_key": "key1", 00:17:28.610 "dhchap_ctrlr_key": "ckey2", 00:17:28.610 "method": "bdev_nvme_attach_controller", 00:17:28.610 "req_id": 1 00:17:28.610 } 00:17:28.610 Got JSON-RPC error response 00:17:28.610 response: 00:17:28.610 { 00:17:28.610 "code": -5, 00:17:28.610 "message": "Input/output error" 00:17:28.610 } 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.610 00:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.547 request: 00:17:29.547 { 00:17:29.547 "name": "nvme0", 00:17:29.547 "trtype": "tcp", 00:17:29.547 "traddr": "10.0.0.2", 00:17:29.547 "adrfam": "ipv4", 00:17:29.547 "trsvcid": "4420", 00:17:29.547 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:29.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:29.547 "prchk_reftag": false, 00:17:29.547 "prchk_guard": false, 00:17:29.547 "hdgst": false, 00:17:29.547 "ddgst": false, 00:17:29.547 "dhchap_key": "key1", 00:17:29.547 "dhchap_ctrlr_key": "ckey1", 00:17:29.547 "method": "bdev_nvme_attach_controller", 00:17:29.547 "req_id": 1 00:17:29.547 } 00:17:29.547 Got JSON-RPC error response 00:17:29.547 response: 00:17:29.547 { 00:17:29.547 "code": -5, 00:17:29.547 "message": "Input/output error" 00:17:29.547 } 00:17:29.547 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:29.547 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:29.547 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:29.547 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:29.547 00:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:29.547 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.547 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.547 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.547 00:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2633821 00:17:29.547 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2633821 ']' 00:17:29.547 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2633821 00:17:29.547 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:29.547 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:29.547 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2633821 00:17:29.547 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:29.547 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:17:29.547 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2633821' 00:17:29.547 killing process with pid 2633821 00:17:29.547 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2633821 00:17:29.547 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2633821 00:17:29.804 00:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:29.804 00:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:29.804 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:29.804 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.804 00:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:29.804 00:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2656584 00:17:29.804 00:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2656584 00:17:29.804 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2656584 ']' 00:17:29.804 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.804 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:29.804 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.805 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:29.805 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.062 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:30.062 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:30.062 00:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:30.062 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:30.062 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.062 00:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.062 00:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:30.062 00:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2656584 00:17:30.062 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2656584 ']' 00:17:30.062 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.062 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:30.062 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
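Every `hostrpc <verb> ...` call in the trace above expands, at target/auth.sh@31, into the same rpc.py invocation against the host application's RPC socket. A minimal bash reconstruction of that helper follows; the script path and socket are copied from the trace, while the plain "$@" forwarding is an assumption about the unshown function body:

    hostrpc() {
        # forward the RPC verb and its arguments to the host-side SPDK app under test
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }

Seen through this wrapper, each deliberately mismatched DH-HMAC-CHAP key above is just a bdev_nvme_attach_controller JSON-RPC request that fails authentication, which is why the NOT assertions expect the "Input/output error" (code -5) responses rather than a shell-level failure.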
00:17:30.062 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:30.062 00:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.321 00:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:30.321 00:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:30.321 00:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:30.321 00:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.321 00:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.578 00:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.578 00:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:30.578 00:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.578 00:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:30.578 00:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:30.578 00:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:30.578 00:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.578 00:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:30.578 00:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.578 00:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.578 00:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.578 00:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.578 00:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.511 00:17:31.511 00:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.511 00:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.511 00:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.768 00:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.768 00:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.768 00:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.768 00:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.768 00:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.768 00:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.768 { 00:17:31.768 
"cntlid": 1, 00:17:31.768 "qid": 0, 00:17:31.768 "state": "enabled", 00:17:31.768 "thread": "nvmf_tgt_poll_group_000", 00:17:31.768 "listen_address": { 00:17:31.768 "trtype": "TCP", 00:17:31.768 "adrfam": "IPv4", 00:17:31.768 "traddr": "10.0.0.2", 00:17:31.768 "trsvcid": "4420" 00:17:31.768 }, 00:17:31.768 "peer_address": { 00:17:31.768 "trtype": "TCP", 00:17:31.768 "adrfam": "IPv4", 00:17:31.768 "traddr": "10.0.0.1", 00:17:31.768 "trsvcid": "55552" 00:17:31.768 }, 00:17:31.768 "auth": { 00:17:31.768 "state": "completed", 00:17:31.768 "digest": "sha512", 00:17:31.768 "dhgroup": "ffdhe8192" 00:17:31.768 } 00:17:31.768 } 00:17:31.768 ]' 00:17:31.768 00:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.768 00:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.768 00:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.768 00:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:31.768 00:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.769 00:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.769 00:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.769 00:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.026 00:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTA3MWEzM2JkNTY4ZGI2YzhlNTBhMjg0NzUxMjM3MmQ5NjYyMzgzZTBlZGI5Y2YzYTBjMDQyZDMyNWViNjFkZlZNQEs=: 00:17:32.962 00:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.962 00:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.962 00:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.962 00:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.962 00:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.962 00:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:32.962 00:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.962 00:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.962 00:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.962 00:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:32.962 00:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:33.528 00:54:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.528 00:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:33.528 00:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.528 00:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:33.528 00:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:33.528 00:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:33.528 00:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:33.528 00:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.528 00:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.528 request: 00:17:33.528 { 00:17:33.528 "name": "nvme0", 00:17:33.528 "trtype": "tcp", 00:17:33.528 "traddr": "10.0.0.2", 00:17:33.528 "adrfam": "ipv4", 00:17:33.528 "trsvcid": "4420", 00:17:33.528 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:33.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:33.528 "prchk_reftag": false, 00:17:33.528 "prchk_guard": false, 00:17:33.528 "hdgst": false, 00:17:33.528 "ddgst": false, 00:17:33.528 "dhchap_key": "key3", 00:17:33.528 "method": "bdev_nvme_attach_controller", 00:17:33.528 "req_id": 1 00:17:33.528 } 00:17:33.528 Got JSON-RPC error response 00:17:33.528 response: 00:17:33.528 { 00:17:33.528 "code": -5, 00:17:33.528 "message": "Input/output error" 00:17:33.528 } 00:17:33.528 00:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:33.528 00:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:33.528 00:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:33.528 00:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:33.528 00:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:33.528 00:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:33.528 00:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:33.529 00:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:33.786 00:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.786 00:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:33.786 00:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.786 00:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:33.786 00:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:33.786 00:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:33.786 00:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:33.786 00:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.786 00:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.044 request: 00:17:34.044 { 00:17:34.044 "name": "nvme0", 00:17:34.044 "trtype": "tcp", 00:17:34.044 "traddr": "10.0.0.2", 00:17:34.044 "adrfam": "ipv4", 00:17:34.044 "trsvcid": "4420", 00:17:34.044 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:34.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:34.044 "prchk_reftag": false, 00:17:34.044 "prchk_guard": false, 00:17:34.044 "hdgst": false, 00:17:34.044 "ddgst": false, 00:17:34.044 "dhchap_key": "key3", 00:17:34.044 "method": "bdev_nvme_attach_controller", 00:17:34.044 "req_id": 1 00:17:34.044 } 00:17:34.044 Got JSON-RPC error response 00:17:34.044 response: 00:17:34.044 { 00:17:34.044 "code": -5, 00:17:34.044 "message": "Input/output error" 00:17:34.044 } 00:17:34.301 00:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:34.301 00:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:34.301 00:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:34.301 00:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:34.301 00:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:34.301 00:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:34.301 00:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:34.301 00:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:34.301 00:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:34.301 00:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:34.558 00:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:34.558 00:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.558 00:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.558 00:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.558 00:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:34.558 00:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.558 00:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.558 00:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.558 00:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:34.558 00:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:34.558 00:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:34.558 00:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:34.558 00:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.558 00:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:34.558 00:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.558 00:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:34.558 00:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:34.816 request: 00:17:34.816 { 00:17:34.816 "name": "nvme0", 00:17:34.816 "trtype": "tcp", 00:17:34.816 "traddr": "10.0.0.2", 00:17:34.816 "adrfam": "ipv4", 00:17:34.816 "trsvcid": "4420", 00:17:34.816 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:34.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:34.816 "prchk_reftag": false, 00:17:34.816 "prchk_guard": false, 00:17:34.816 "hdgst": false, 00:17:34.817 "ddgst": false, 00:17:34.817 
"dhchap_key": "key0", 00:17:34.817 "dhchap_ctrlr_key": "key1", 00:17:34.817 "method": "bdev_nvme_attach_controller", 00:17:34.817 "req_id": 1 00:17:34.817 } 00:17:34.817 Got JSON-RPC error response 00:17:34.817 response: 00:17:34.817 { 00:17:34.817 "code": -5, 00:17:34.817 "message": "Input/output error" 00:17:34.817 } 00:17:34.817 00:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:34.817 00:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:34.817 00:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:34.817 00:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:34.817 00:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:34.817 00:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:35.075 00:17:35.075 00:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:35.075 00:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:35.075 00:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.333 00:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.333 00:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.333 00:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.591 00:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:35.591 00:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:35.591 00:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2633970 00:17:35.591 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2633970 ']' 00:17:35.591 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2633970 00:17:35.591 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:35.591 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:35.591 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2633970 00:17:35.591 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:35.591 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:35.591 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2633970' 00:17:35.591 killing process with pid 2633970 00:17:35.591 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2633970 00:17:35.591 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2633970 
00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:36.155 rmmod nvme_tcp 00:17:36.155 rmmod nvme_fabrics 00:17:36.155 rmmod nvme_keyring 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2656584 ']' 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2656584 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2656584 ']' 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2656584 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2656584 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2656584' 00:17:36.155 killing process with pid 2656584 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2656584 00:17:36.155 00:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2656584 00:17:36.413 00:54:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:36.413 00:54:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:36.413 00:54:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:36.413 00:54:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:36.413 00:54:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:36.413 00:54:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.413 00:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.413 00:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.317 00:54:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:38.317 00:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.bvC /tmp/spdk.key-sha256.ba3 /tmp/spdk.key-sha384.1V3 /tmp/spdk.key-sha512.1Ik /tmp/spdk.key-sha512.Oak /tmp/spdk.key-sha384.7Qu /tmp/spdk.key-sha256.rLY '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:38.317 00:17:38.317 real 3m10.811s 00:17:38.317 user 7m23.470s 00:17:38.317 sys 0m25.100s 00:17:38.317 00:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:38.317 00:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.317 ************************************ 00:17:38.317 END TEST nvmf_auth_target 00:17:38.317 ************************************ 00:17:38.576 00:54:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:38.576 00:54:13 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:38.576 00:54:13 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:38.576 00:54:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:38.576 00:54:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:38.576 00:54:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:38.576 ************************************ 00:17:38.576 START TEST nvmf_bdevio_no_huge 00:17:38.576 ************************************ 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:38.576 * Looking for test storage... 00:17:38.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
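The host identity that both this bdevio run and the preceding auth tests pass to nvme connect comes from nvmf/common.sh@17-19 just above: the NQN is generated with nvme-cli and the bare UUID is reused as the host ID. A small sketch of that derivation; the gen-hostnqn call and the NVME_HOST array are shown verbatim in the trace, but the exact string-extraction expression for the UUID is an assumption, since the log only shows the resulting values:

    NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # bare UUID portion of the generated NQN
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")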
00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:38.576 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:38.577 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:38.577 00:54:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:40.479 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:40.479 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:40.479 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:40.479 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:40.479 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:40.479 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:40.479 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:40.479 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:40.479 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:40.479 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:40.479 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:40.479 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:40.480 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:40.480 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:40.480 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:40.480 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:40.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:17:40.480 00:17:40.480 --- 10.0.0.2 ping statistics --- 00:17:40.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.480 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:40.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:40.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:17:40.480 00:17:40.480 --- 10.0.0.1 ping statistics --- 00:17:40.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.480 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:40.480 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:40.739 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:40.739 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:40.739 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:40.739 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:40.739 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2659846 00:17:40.739 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:40.739 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2659846 00:17:40.739 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2659846 ']' 00:17:40.739 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.739 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.739 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.739 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.739 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:40.739 [2024-07-16 00:54:15.297434] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:17:40.739 [2024-07-16 00:54:15.297532] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:40.739 [2024-07-16 00:54:15.372023] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:40.739 [2024-07-16 00:54:15.480896] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
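Condensed, the two-port test topology that nvmf_tcp_init assembled above (one e810 port moved into a network namespace as the target side, the other left in the root namespace as the initiator side) comes down to the following commands, copied from the trace with only the comments added:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                         # target lives in its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root netns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside netns)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP back to the host

The two ping checks whose output appears above simply confirm that 10.0.0.2 is reachable from the root namespace and 10.0.0.1 from inside cvl_0_0_ns_spdk before the target is started with `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt --no-huge -s 1024 -m 0x78`.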
00:17:40.739 [2024-07-16 00:54:15.480966] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.739 [2024-07-16 00:54:15.480980] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.739 [2024-07-16 00:54:15.480991] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.739 [2024-07-16 00:54:15.481000] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.739 [2024-07-16 00:54:15.481069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:40.739 [2024-07-16 00:54:15.481128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:40.739 [2024-07-16 00:54:15.481193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:40.739 [2024-07-16 00:54:15.481200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:40.998 [2024-07-16 00:54:15.601297] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:40.998 Malloc0 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.998 00:54:15 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:40.998 [2024-07-16 00:54:15.638140] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:40.998 { 00:17:40.998 "params": { 00:17:40.998 "name": "Nvme$subsystem", 00:17:40.998 "trtype": "$TEST_TRANSPORT", 00:17:40.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:40.998 "adrfam": "ipv4", 00:17:40.998 "trsvcid": "$NVMF_PORT", 00:17:40.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:40.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:40.998 "hdgst": ${hdgst:-false}, 00:17:40.998 "ddgst": ${ddgst:-false} 00:17:40.998 }, 00:17:40.998 "method": "bdev_nvme_attach_controller" 00:17:40.998 } 00:17:40.998 EOF 00:17:40.998 )") 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:40.998 00:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:40.998 "params": { 00:17:40.998 "name": "Nvme1", 00:17:40.998 "trtype": "tcp", 00:17:40.998 "traddr": "10.0.0.2", 00:17:40.998 "adrfam": "ipv4", 00:17:40.998 "trsvcid": "4420", 00:17:40.998 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.998 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.998 "hdgst": false, 00:17:40.998 "ddgst": false 00:17:40.998 }, 00:17:40.998 "method": "bdev_nvme_attach_controller" 00:17:40.998 }' 00:17:40.998 [2024-07-16 00:54:15.683670] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
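The bdevio run above receives its target description through --json /dev/fd/62; the JSON printed by gen_nvmf_target_json reduces to a single bdev_nvme_attach_controller entry. A minimal standalone sketch of that configuration follows, using the same addresses and NQNs shown in the generated JSON; the top-level "subsystems"/"bdev"/"config" wrapper is not echoed in the trace and is an assumption here.

# Hypothetical standalone equivalent of the config bdevio reads from /dev/fd/62
# (parameter values copied from the generated JSON above; wrapper layout assumed).
cat > /tmp/bdevio_nvmf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# test/bdev/bdevio/bdevio --json /tmp/bdevio_nvmf.json --no-huge -s 1024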
00:17:40.998 [2024-07-16 00:54:15.683747] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2659881 ] 00:17:40.998 [2024-07-16 00:54:15.746585] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:41.256 [2024-07-16 00:54:15.858028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.256 [2024-07-16 00:54:15.858077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.256 [2024-07-16 00:54:15.858081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.514 I/O targets: 00:17:41.514 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:41.514 00:17:41.514 00:17:41.514 CUnit - A unit testing framework for C - Version 2.1-3 00:17:41.514 http://cunit.sourceforge.net/ 00:17:41.514 00:17:41.514 00:17:41.514 Suite: bdevio tests on: Nvme1n1 00:17:41.514 Test: blockdev write read block ...passed 00:17:41.514 Test: blockdev write zeroes read block ...passed 00:17:41.514 Test: blockdev write zeroes read no split ...passed 00:17:41.514 Test: blockdev write zeroes read split ...passed 00:17:41.514 Test: blockdev write zeroes read split partial ...passed 00:17:41.514 Test: blockdev reset ...[2024-07-16 00:54:16.270426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:41.514 [2024-07-16 00:54:16.270546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6100 (9): Bad file descriptor 00:17:41.772 [2024-07-16 00:54:16.374134] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:41.772 passed 00:17:41.772 Test: blockdev write read 8 blocks ...passed 00:17:41.772 Test: blockdev write read size > 128k ...passed 00:17:41.772 Test: blockdev write read invalid size ...passed 00:17:41.772 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:41.772 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:41.772 Test: blockdev write read max offset ...passed 00:17:41.772 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:41.772 Test: blockdev writev readv 8 blocks ...passed 00:17:41.772 Test: blockdev writev readv 30 x 1block ...passed 00:17:42.030 Test: blockdev writev readv block ...passed 00:17:42.030 Test: blockdev writev readv size > 128k ...passed 00:17:42.030 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:42.030 Test: blockdev comparev and writev ...[2024-07-16 00:54:16.552779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.030 [2024-07-16 00:54:16.552815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.030 [2024-07-16 00:54:16.552840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.030 [2024-07-16 00:54:16.552858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.030 [2024-07-16 00:54:16.553289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.030 [2024-07-16 00:54:16.553314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:42.030 [2024-07-16 00:54:16.553336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.030 [2024-07-16 00:54:16.553352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:42.030 [2024-07-16 00:54:16.553756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.030 [2024-07-16 00:54:16.553780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:42.030 [2024-07-16 00:54:16.553801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.030 [2024-07-16 00:54:16.553817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:42.030 [2024-07-16 00:54:16.554235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.030 [2024-07-16 00:54:16.554259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:42.030 [2024-07-16 00:54:16.554281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.030 [2024-07-16 00:54:16.554297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:42.030 passed 00:17:42.030 Test: blockdev nvme passthru rw ...passed 00:17:42.030 Test: blockdev nvme passthru vendor specific ...[2024-07-16 00:54:16.638293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:42.030 [2024-07-16 00:54:16.638325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:42.030 [2024-07-16 00:54:16.638536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:42.030 [2024-07-16 00:54:16.638560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:42.030 [2024-07-16 00:54:16.638769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:42.030 [2024-07-16 00:54:16.638792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:42.030 [2024-07-16 00:54:16.639008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:42.030 [2024-07-16 00:54:16.639032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:42.030 passed 00:17:42.030 Test: blockdev nvme admin passthru ...passed 00:17:42.030 Test: blockdev copy ...passed 00:17:42.030 00:17:42.030 Run Summary: Type Total Ran Passed Failed Inactive 00:17:42.030 suites 1 1 n/a 0 0 00:17:42.030 tests 23 23 23 0 0 00:17:42.030 asserts 152 152 152 0 n/a 00:17:42.030 00:17:42.030 Elapsed time = 1.273 seconds 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:42.597 rmmod nvme_tcp 00:17:42.597 rmmod nvme_fabrics 00:17:42.597 rmmod nvme_keyring 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2659846 ']' 00:17:42.597 00:54:17 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2659846 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2659846 ']' 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2659846 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2659846 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2659846' 00:17:42.597 killing process with pid 2659846 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2659846 00:17:42.597 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2659846 00:17:42.856 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:42.856 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:42.856 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:42.856 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.856 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:42.856 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.856 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.856 00:54:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.428 00:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:45.428 00:17:45.428 real 0m6.497s 00:17:45.428 user 0m10.921s 00:17:45.428 sys 0m2.423s 00:17:45.428 00:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:45.428 00:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:45.428 ************************************ 00:17:45.428 END TEST nvmf_bdevio_no_huge 00:17:45.428 ************************************ 00:17:45.428 00:54:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:45.428 00:54:19 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:45.428 00:54:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:45.428 00:54:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:45.428 00:54:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:45.428 ************************************ 00:17:45.428 START TEST nvmf_tls 00:17:45.428 ************************************ 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:45.428 * Looking for test storage... 
00:17:45.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.428 00:54:19 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:45.429 00:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:47.328 
00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:47.328 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:47.329 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:47.329 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:47.329 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:47.329 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:47.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:47.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:17:47.329 00:17:47.329 --- 10.0.0.2 ping statistics --- 00:17:47.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.329 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:47.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:47.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:17:47.329 00:17:47.329 --- 10.0.0.1 ping statistics --- 00:17:47.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.329 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2662067 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2662067 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2662067 ']' 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.329 00:54:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.329 [2024-07-16 00:54:22.003296] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:17:47.329 [2024-07-16 00:54:22.003381] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.329 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.329 [2024-07-16 00:54:22.073182] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.586 [2024-07-16 00:54:22.188838] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.586 [2024-07-16 00:54:22.188905] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
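Condensed from the nvmf_tcp_init trace above, the namespace plumbing that gives the target 10.0.0.2 (inside cvl_0_0_ns_spdk) and the initiator 10.0.0.1 (root namespace) amounts to roughly the following; interface names and addresses are exactly those reported in the trace, and the initial address flushes are omitted.

# Sketch of the network-namespace setup performed by nvmf/common.sh above.
ip netns add cvl_0_0_ns_spdk                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator sanity check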
00:17:47.586 [2024-07-16 00:54:22.188943] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.586 [2024-07-16 00:54:22.188961] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.586 [2024-07-16 00:54:22.188977] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.586 [2024-07-16 00:54:22.189010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.516 00:54:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.516 00:54:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:48.516 00:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:48.516 00:54:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:48.516 00:54:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.516 00:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.516 00:54:22 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:48.516 00:54:22 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:48.516 true 00:17:48.516 00:54:23 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:48.516 00:54:23 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:48.773 00:54:23 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:48.773 00:54:23 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:48.773 00:54:23 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:49.029 00:54:23 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:49.029 00:54:23 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:49.286 00:54:23 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:49.286 00:54:23 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:49.286 00:54:23 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:49.542 00:54:24 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:49.542 00:54:24 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:49.800 00:54:24 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:49.800 00:54:24 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:49.800 00:54:24 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:49.800 00:54:24 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:50.058 00:54:24 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:50.058 00:54:24 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:50.058 00:54:24 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:50.315 00:54:25 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:50.315 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:50.573 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:50.573 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:50.573 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:50.830 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:50.830 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.a1P3JgukTd 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:51.088 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.28pvLFjeNL 00:17:51.346 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:51.346 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:51.346 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.a1P3JgukTd 00:17:51.346 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.28pvLFjeNL 00:17:51.346 00:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:17:51.346 00:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:51.912 00:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.a1P3JgukTd 00:17:51.912 00:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.a1P3JgukTd 00:17:51.912 00:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:52.168 [2024-07-16 00:54:26.711372] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.169 00:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:52.426 00:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:52.684 [2024-07-16 00:54:27.300971] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:52.684 [2024-07-16 00:54:27.301255] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.684 00:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:52.942 malloc0 00:17:52.942 00:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:53.200 00:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a1P3JgukTd 00:17:53.458 [2024-07-16 00:54:28.030481] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:53.458 00:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.a1P3JgukTd 00:17:53.458 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.424 Initializing NVMe Controllers 00:18:03.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:03.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:03.424 Initialization complete. Launching workers. 
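The target-side TLS bring-up that the nvmf_tls suite performs (shown piecemeal in the trace above) condenses to roughly the sequence below. rpc.py stands for the full scripts/rpc.py path used in the trace, and the key file path is the one returned by mktemp above; the redirect of the echoed key into that file is implied by the trace rather than shown.

# Sketch of the target-side TLS setup driven by target/tls.sh above.
KEY=/tmp/tmp.a1P3JgukTd
echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$KEY"
chmod 0600 "$KEY"                                     # matches the chmod done by the test
rpc.py sock_impl_set_options -i ssl --tls-version 13  # force TLS 1.3 on the ssl sock impl
rpc.py framework_start_init                           # target was started with --wait-for-rpc
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

The --psk file form used here is the deprecated PSK-path interface that the nvmf_tcp_psk_path WARNING in the trace calls out.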
00:18:03.424 ======================================================== 00:18:03.424 Latency(us) 00:18:03.424 Device Information : IOPS MiB/s Average min max 00:18:03.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7319.18 28.59 8747.18 1283.29 9413.89 00:18:03.424 ======================================================== 00:18:03.424 Total : 7319.18 28.59 8747.18 1283.29 9413.89 00:18:03.424 00:18:03.424 00:54:38 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.a1P3JgukTd 00:18:03.424 00:54:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:03.424 00:54:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:03.424 00:54:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:03.424 00:54:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.a1P3JgukTd' 00:18:03.424 00:54:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:03.424 00:54:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2663967 00:18:03.424 00:54:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:03.424 00:54:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:03.424 00:54:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2663967 /var/tmp/bdevperf.sock 00:18:03.424 00:54:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2663967 ']' 00:18:03.424 00:54:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.424 00:54:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.424 00:54:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:03.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:03.424 00:54:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.424 00:54:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.682 [2024-07-16 00:54:38.206951] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
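On the initiator side the same key is exercised through bdevperf rather than spdk_nvme_perf: bdevperf is started idle (-z) on its own RPC socket, a TLS controller is attached to it over that socket, and the verify workload is then driven with bdevperf.py. Condensed from the trace around this point, with paths shortened and backgrounding with & added purely for illustration:

# Sketch of the bdevperf-based TLS verification flow (run_bdevperf in target/tls.sh).
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.a1P3JgukTd
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests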
00:18:03.682 [2024-07-16 00:54:38.207043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2663967 ] 00:18:03.682 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.682 [2024-07-16 00:54:38.273327] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.682 [2024-07-16 00:54:38.389724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.939 00:54:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:03.939 00:54:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:03.939 00:54:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a1P3JgukTd 00:18:04.195 [2024-07-16 00:54:38.778092] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.195 [2024-07-16 00:54:38.778219] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:04.195 TLSTESTn1 00:18:04.195 00:54:38 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:04.452 Running I/O for 10 seconds... 00:18:14.471 00:18:14.471 Latency(us) 00:18:14.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.471 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:14.471 Verification LBA range: start 0x0 length 0x2000 00:18:14.471 TLSTESTn1 : 10.06 1811.87 7.08 0.00 0.00 70438.71 5898.24 102527.43 00:18:14.471 =================================================================================================================== 00:18:14.471 Total : 1811.87 7.08 0.00 0.00 70438.71 5898.24 102527.43 00:18:14.471 0 00:18:14.471 00:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:14.471 00:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2663967 00:18:14.471 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2663967 ']' 00:18:14.471 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2663967 00:18:14.471 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:14.471 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:14.471 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2663967 00:18:14.471 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:14.471 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:14.471 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2663967' 00:18:14.471 killing process with pid 2663967 00:18:14.471 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2663967 00:18:14.471 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.471 00:18:14.471 Latency(us) 00:18:14.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:18:14.471 =================================================================================================================== 00:18:14.471 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:14.471 [2024-07-16 00:54:49.116814] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:14.471 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2663967 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.28pvLFjeNL 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.28pvLFjeNL 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.28pvLFjeNL 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.28pvLFjeNL' 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2665282 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2665282 /var/tmp/bdevperf.sock 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2665282 ']' 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.729 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.730 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.730 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.730 [2024-07-16 00:54:49.431892] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:18:14.730 [2024-07-16 00:54:49.431970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2665282 ] 00:18:14.730 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.988 [2024-07-16 00:54:49.489359] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.988 [2024-07-16 00:54:49.592105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.988 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.988 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:14.988 00:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.28pvLFjeNL 00:18:15.246 [2024-07-16 00:54:49.922914] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:15.246 [2024-07-16 00:54:49.923020] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:15.246 [2024-07-16 00:54:49.931431] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:15.246 [2024-07-16 00:54:49.932344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc9150 (107): Transport endpoint is not connected 00:18:15.246 [2024-07-16 00:54:49.933337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc9150 (9): Bad file descriptor 00:18:15.246 [2024-07-16 00:54:49.934336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:15.246 [2024-07-16 00:54:49.934354] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:15.246 [2024-07-16 00:54:49.934366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:15.246 request: 00:18:15.246 { 00:18:15.246 "name": "TLSTEST", 00:18:15.246 "trtype": "tcp", 00:18:15.246 "traddr": "10.0.0.2", 00:18:15.246 "adrfam": "ipv4", 00:18:15.246 "trsvcid": "4420", 00:18:15.246 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.246 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:15.246 "prchk_reftag": false, 00:18:15.246 "prchk_guard": false, 00:18:15.246 "hdgst": false, 00:18:15.246 "ddgst": false, 00:18:15.246 "psk": "/tmp/tmp.28pvLFjeNL", 00:18:15.246 "method": "bdev_nvme_attach_controller", 00:18:15.246 "req_id": 1 00:18:15.246 } 00:18:15.246 Got JSON-RPC error response 00:18:15.246 response: 00:18:15.246 { 00:18:15.246 "code": -5, 00:18:15.246 "message": "Input/output error" 00:18:15.246 } 00:18:15.246 00:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2665282 00:18:15.246 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2665282 ']' 00:18:15.246 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2665282 00:18:15.246 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:15.246 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:15.246 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2665282 00:18:15.246 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:15.246 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:15.246 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2665282' 00:18:15.246 killing process with pid 2665282 00:18:15.246 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2665282 00:18:15.246 Received shutdown signal, test time was about 10.000000 seconds 00:18:15.246 00:18:15.246 Latency(us) 00:18:15.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.246 =================================================================================================================== 00:18:15.246 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:15.246 [2024-07-16 00:54:49.974310] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:15.246 00:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2665282 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.a1P3JgukTd 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.a1P3JgukTd 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.a1P3JgukTd 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.a1P3JgukTd' 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2665346 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2665346 /var/tmp/bdevperf.sock 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2665346 ']' 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:15.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.504 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.504 [2024-07-16 00:54:50.257285] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:18:15.504 [2024-07-16 00:54:50.257361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2665346 ] 00:18:15.763 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.763 [2024-07-16 00:54:50.319543] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.763 [2024-07-16 00:54:50.427236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.020 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.020 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:16.020 00:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.a1P3JgukTd 00:18:16.020 [2024-07-16 00:54:50.764537] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:16.020 [2024-07-16 00:54:50.764639] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:16.020 [2024-07-16 00:54:50.773542] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:16.021 [2024-07-16 00:54:50.773574] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:16.021 [2024-07-16 00:54:50.773639] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:16.021 [2024-07-16 00:54:50.773835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2161150 (107): Transport endpoint is not connected 00:18:16.021 [2024-07-16 00:54:50.774822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2161150 (9): Bad file descriptor 00:18:16.021 [2024-07-16 00:54:50.775821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:16.021 [2024-07-16 00:54:50.775840] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:16.021 [2024-07-16 00:54:50.775867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
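The "Could not find PSK for identity" errors above name the TLS PSK identity the target looks up during the handshake: the fixed prefix NVMe0R01 followed by the host NQN and the subsystem NQN. Here nqn.2016-06.io.spdk:host2 was never registered against cnode1, so the lookup fails and the connection is torn down before controller initialization completes, which leads to the error response dumped next. A purely illustrative helper (not part of the test scripts) that assembles the same identity string:

    # Illustrative only: rebuild the PSK identity string printed in the error above.
    def psk_identity(hostnqn: str, subnqn: str) -> str:
        # Prefix, host NQN and subsystem NQN are joined by single spaces.
        return " ".join(["NVMe0R01", hostnqn, subnqn])

    print(psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))
    # NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1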
00:18:16.021 request: 00:18:16.021 { 00:18:16.021 "name": "TLSTEST", 00:18:16.021 "trtype": "tcp", 00:18:16.021 "traddr": "10.0.0.2", 00:18:16.021 "adrfam": "ipv4", 00:18:16.021 "trsvcid": "4420", 00:18:16.021 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.021 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:16.021 "prchk_reftag": false, 00:18:16.021 "prchk_guard": false, 00:18:16.021 "hdgst": false, 00:18:16.021 "ddgst": false, 00:18:16.021 "psk": "/tmp/tmp.a1P3JgukTd", 00:18:16.021 "method": "bdev_nvme_attach_controller", 00:18:16.021 "req_id": 1 00:18:16.021 } 00:18:16.021 Got JSON-RPC error response 00:18:16.021 response: 00:18:16.021 { 00:18:16.021 "code": -5, 00:18:16.021 "message": "Input/output error" 00:18:16.021 } 00:18:16.279 00:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2665346 00:18:16.279 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2665346 ']' 00:18:16.279 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2665346 00:18:16.279 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:16.279 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:16.279 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2665346 00:18:16.279 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:16.279 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:16.279 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2665346' 00:18:16.279 killing process with pid 2665346 00:18:16.279 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2665346 00:18:16.279 Received shutdown signal, test time was about 10.000000 seconds 00:18:16.279 00:18:16.279 Latency(us) 00:18:16.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.279 =================================================================================================================== 00:18:16.279 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:16.279 [2024-07-16 00:54:50.823210] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:16.279 00:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2665346 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.a1P3JgukTd 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.a1P3JgukTd 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.a1P3JgukTd 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.a1P3JgukTd' 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2665444 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2665444 /var/tmp/bdevperf.sock 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2665444 ']' 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.537 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.537 [2024-07-16 00:54:51.124840] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:18:16.537 [2024-07-16 00:54:51.124926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2665444 ] 00:18:16.537 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.537 [2024-07-16 00:54:51.181606] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.537 [2024-07-16 00:54:51.284213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.795 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.795 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:16.795 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a1P3JgukTd 00:18:17.054 [2024-07-16 00:54:51.619675] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:17.054 [2024-07-16 00:54:51.619779] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:17.054 [2024-07-16 00:54:51.625752] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:17.054 [2024-07-16 00:54:51.625798] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:17.054 [2024-07-16 00:54:51.625861] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:17.054 [2024-07-16 00:54:51.627017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2036150 (107): Transport endpoint is not connected 00:18:17.054 [2024-07-16 00:54:51.628009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2036150 (9): Bad file descriptor 00:18:17.054 [2024-07-16 00:54:51.629009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:17.054 [2024-07-16 00:54:51.629027] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:17.054 [2024-07-16 00:54:51.629040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:17.054 request: 00:18:17.054 { 00:18:17.054 "name": "TLSTEST", 00:18:17.054 "trtype": "tcp", 00:18:17.054 "traddr": "10.0.0.2", 00:18:17.054 "adrfam": "ipv4", 00:18:17.054 "trsvcid": "4420", 00:18:17.054 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:17.054 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.054 "prchk_reftag": false, 00:18:17.054 "prchk_guard": false, 00:18:17.054 "hdgst": false, 00:18:17.054 "ddgst": false, 00:18:17.054 "psk": "/tmp/tmp.a1P3JgukTd", 00:18:17.054 "method": "bdev_nvme_attach_controller", 00:18:17.054 "req_id": 1 00:18:17.054 } 00:18:17.054 Got JSON-RPC error response 00:18:17.054 response: 00:18:17.054 { 00:18:17.054 "code": -5, 00:18:17.054 "message": "Input/output error" 00:18:17.054 } 00:18:17.054 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2665444 00:18:17.054 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2665444 ']' 00:18:17.054 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2665444 00:18:17.054 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:17.054 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:17.054 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2665444 00:18:17.054 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:17.054 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:17.054 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2665444' 00:18:17.054 killing process with pid 2665444 00:18:17.054 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2665444 00:18:17.054 Received shutdown signal, test time was about 10.000000 seconds 00:18:17.054 00:18:17.054 Latency(us) 00:18:17.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.054 =================================================================================================================== 00:18:17.054 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:17.054 [2024-07-16 00:54:51.671464] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:17.054 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2665444 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2665581 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2665581 /var/tmp/bdevperf.sock 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2665581 ']' 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.313 00:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.313 [2024-07-16 00:54:51.949378] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:18:17.313 [2024-07-16 00:54:51.949451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2665581 ] 00:18:17.313 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.313 [2024-07-16 00:54:52.008092] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.571 [2024-07-16 00:54:52.118791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.571 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.571 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:17.571 00:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:17.829 [2024-07-16 00:54:52.459116] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:17.829 [2024-07-16 00:54:52.461000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249d910 (9): Bad file descriptor 00:18:17.829 [2024-07-16 00:54:52.461999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:17.829 [2024-07-16 00:54:52.462019] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:17.829 [2024-07-16 00:54:52.462040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:17.829 request: 00:18:17.829 { 00:18:17.829 "name": "TLSTEST", 00:18:17.829 "trtype": "tcp", 00:18:17.829 "traddr": "10.0.0.2", 00:18:17.829 "adrfam": "ipv4", 00:18:17.829 "trsvcid": "4420", 00:18:17.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.829 "prchk_reftag": false, 00:18:17.829 "prchk_guard": false, 00:18:17.829 "hdgst": false, 00:18:17.829 "ddgst": false, 00:18:17.829 "method": "bdev_nvme_attach_controller", 00:18:17.829 "req_id": 1 00:18:17.829 } 00:18:17.829 Got JSON-RPC error response 00:18:17.829 response: 00:18:17.829 { 00:18:17.829 "code": -5, 00:18:17.829 "message": "Input/output error" 00:18:17.829 } 00:18:17.829 00:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2665581 00:18:17.829 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2665581 ']' 00:18:17.829 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2665581 00:18:17.829 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:17.829 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:17.829 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2665581 00:18:17.829 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:17.829 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:17.829 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2665581' 00:18:17.829 killing process with pid 2665581 00:18:17.829 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2665581 00:18:17.829 Received shutdown signal, test time was about 10.000000 seconds 00:18:17.829 00:18:17.829 Latency(us) 00:18:17.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.829 =================================================================================================================== 00:18:17.829 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:17.829 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2665581 00:18:18.088 00:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:18.088 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:18.088 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:18.088 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:18.088 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:18.088 00:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2662067 00:18:18.088 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2662067 ']' 00:18:18.088 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2662067 00:18:18.088 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:18.088 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:18.088 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2662067 00:18:18.088 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:18.088 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:18.088 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2662067' 00:18:18.088 
killing process with pid 2662067 00:18:18.088 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2662067 00:18:18.088 [2024-07-16 00:54:52.793474] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:18.088 00:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2662067 00:18:18.346 00:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:18.346 00:54:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:18.347 00:54:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:18.347 00:54:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:18.347 00:54:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:18.347 00:54:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:18.347 00:54:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:18.605 00:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:18.605 00:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:18.605 00:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.IScbN68pi4 00:18:18.605 00:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:18.605 00:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.IScbN68pi4 00:18:18.605 00:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:18.605 00:54:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:18.605 00:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:18.605 00:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.605 00:54:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2665731 00:18:18.605 00:54:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:18.605 00:54:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2665731 00:18:18.605 00:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2665731 ']' 00:18:18.605 00:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.605 00:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:18.605 00:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.605 00:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:18.605 00:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.605 [2024-07-16 00:54:53.198239] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
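The NVMeTLSkey-1:02:... value printed above by format_interchange_psk is the PSK interchange form of the configured key: a fixed prefix, a two-digit hash indicator (02 here), and a base64 blob, terminated by a colon. A rough sketch of producing the same value, under the assumption that the blob is the ASCII key bytes followed by their CRC32 in little-endian byte order (consistent with the value above, but not a quote of the test helper):

    import base64
    import struct
    import zlib

    # Sketch under the assumptions stated above: build an NVMe TLS PSK
    # interchange key from the raw configured key string.
    def format_interchange_psk(key: str, hash_id: int) -> str:
        data = key.encode("ascii")
        crc = struct.pack("<I", zlib.crc32(data) & 0xFFFFFFFF)  # assumed little-endian CRC32
        blob = base64.b64encode(data + crc).decode()
        return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, blob)

    print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))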
00:18:18.605 [2024-07-16 00:54:53.198319] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.605 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.605 [2024-07-16 00:54:53.262166] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.863 [2024-07-16 00:54:53.369356] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.863 [2024-07-16 00:54:53.369401] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.863 [2024-07-16 00:54:53.369423] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.863 [2024-07-16 00:54:53.369434] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.863 [2024-07-16 00:54:53.369458] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:18.863 [2024-07-16 00:54:53.369484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.863 00:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.863 00:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:18.863 00:54:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:18.863 00:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:18.863 00:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.863 00:54:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.863 00:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.IScbN68pi4 00:18:18.863 00:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IScbN68pi4 00:18:18.863 00:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:19.120 [2024-07-16 00:54:53.791192] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.120 00:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:19.377 00:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:19.635 [2024-07-16 00:54:54.372733] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:19.635 [2024-07-16 00:54:54.372997] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.892 00:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:20.150 malloc0 00:18:20.150 00:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:20.408 00:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.IScbN68pi4 00:18:20.666 [2024-07-16 00:54:55.178845] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:20.666 00:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IScbN68pi4 00:18:20.666 00:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:20.666 00:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:20.666 00:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:20.666 00:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IScbN68pi4' 00:18:20.666 00:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:20.666 00:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2666013 00:18:20.666 00:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:20.666 00:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2666013 /var/tmp/bdevperf.sock 00:18:20.666 00:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:20.666 00:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2666013 ']' 00:18:20.666 00:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.666 00:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.666 00:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:20.666 00:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.666 00:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.666 [2024-07-16 00:54:55.242873] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:18:20.666 [2024-07-16 00:54:55.242969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666013 ] 00:18:20.666 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.666 [2024-07-16 00:54:55.304100] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.666 [2024-07-16 00:54:55.412861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.924 00:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.924 00:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:20.924 00:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IScbN68pi4 00:18:21.181 [2024-07-16 00:54:55.744330] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:21.181 [2024-07-16 00:54:55.744445] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:21.181 TLSTESTn1 00:18:21.181 00:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:21.439 Running I/O for 10 seconds... 00:18:31.398 00:18:31.398 Latency(us) 00:18:31.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.398 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:31.398 Verification LBA range: start 0x0 length 0x2000 00:18:31.398 TLSTESTn1 : 10.06 1824.49 7.13 0.00 0.00 69957.49 6043.88 107187.77 00:18:31.398 =================================================================================================================== 00:18:31.398 Total : 1824.49 7.13 0.00 0.00 69957.49 6043.88 107187.77 00:18:31.398 0 00:18:31.398 00:55:06 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:31.398 00:55:06 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2666013 00:18:31.398 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2666013 ']' 00:18:31.398 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2666013 00:18:31.398 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:31.398 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:31.398 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2666013 00:18:31.398 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:31.398 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:31.398 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2666013' 00:18:31.398 killing process with pid 2666013 00:18:31.398 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2666013 00:18:31.398 Received shutdown signal, test time was about 10.000000 seconds 00:18:31.398 00:18:31.398 Latency(us) 00:18:31.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:18:31.398 =================================================================================================================== 00:18:31.398 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:31.398 [2024-07-16 00:55:06.071994] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:31.398 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2666013 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.IScbN68pi4 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IScbN68pi4 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IScbN68pi4 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IScbN68pi4 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IScbN68pi4' 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2667343 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2667343 /var/tmp/bdevperf.sock 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2667343 ']' 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.654 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.654 [2024-07-16 00:55:06.385473] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:18:31.654 [2024-07-16 00:55:06.385564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667343 ] 00:18:31.654 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.910 [2024-07-16 00:55:06.443254] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.910 [2024-07-16 00:55:06.545647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.910 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:31.911 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:31.911 00:55:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IScbN68pi4 00:18:32.166 [2024-07-16 00:55:06.895399] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:32.166 [2024-07-16 00:55:06.895480] bdev_nvme.c:6130:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:32.166 [2024-07-16 00:55:06.895494] bdev_nvme.c:6235:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.IScbN68pi4 00:18:32.166 request: 00:18:32.166 { 00:18:32.166 "name": "TLSTEST", 00:18:32.166 "trtype": "tcp", 00:18:32.166 "traddr": "10.0.0.2", 00:18:32.166 "adrfam": "ipv4", 00:18:32.167 "trsvcid": "4420", 00:18:32.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:32.167 "prchk_reftag": false, 00:18:32.167 "prchk_guard": false, 00:18:32.167 "hdgst": false, 00:18:32.167 "ddgst": false, 00:18:32.167 "psk": "/tmp/tmp.IScbN68pi4", 00:18:32.167 "method": "bdev_nvme_attach_controller", 00:18:32.167 "req_id": 1 00:18:32.167 } 00:18:32.167 Got JSON-RPC error response 00:18:32.167 response: 00:18:32.167 { 00:18:32.167 "code": -1, 00:18:32.167 "message": "Operation not permitted" 00:18:32.167 } 00:18:32.167 00:55:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2667343 00:18:32.167 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2667343 ']' 00:18:32.167 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2667343 00:18:32.167 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:32.167 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.167 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2667343 00:18:32.424 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:32.424 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:32.424 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2667343' 00:18:32.424 killing process with pid 2667343 00:18:32.424 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2667343 00:18:32.424 Received shutdown signal, test time was about 10.000000 seconds 00:18:32.424 00:18:32.424 Latency(us) 00:18:32.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.424 
=================================================================================================================== 00:18:32.424 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:32.424 00:55:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2667343 00:18:32.424 00:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:32.424 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:32.424 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:32.424 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:32.424 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:32.424 00:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2665731 00:18:32.424 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2665731 ']' 00:18:32.424 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2665731 00:18:32.424 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:32.424 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.681 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2665731 00:18:32.681 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:32.681 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:32.681 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2665731' 00:18:32.681 killing process with pid 2665731 00:18:32.681 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2665731 00:18:32.681 [2024-07-16 00:55:07.205192] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:32.681 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2665731 00:18:32.939 00:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:32.939 00:55:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:32.939 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:32.939 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.939 00:55:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2667488 00:18:32.939 00:55:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:32.939 00:55:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2667488 00:18:32.939 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2667488 ']' 00:18:32.939 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.939 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.939 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
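The "Incorrect permissions for PSK file" / "Operation not permitted" failure a few entries above is caused by the chmod 0666 applied to /tmp/tmp.IScbN68pi4 earlier in the test: the PSK file must not be readable by group or others, and the test restores mode 0600 before reusing the key. A small hypothetical helper (not part of the test scripts) that writes a PSK to a file with owner-only permissions before handing the path to bdev_nvme_attach_controller or nvmf_subsystem_add_host:

    import os
    import tempfile

    # Hypothetical helper: store a PSK in a file readable/writable only by its
    # owner (mode 0600), which is what the permission check above expects.
    def write_psk_file(psk: str) -> str:
        fd, path = tempfile.mkstemp(prefix="spdk_psk_")
        try:
            os.fchmod(fd, 0o600)
            os.write(fd, psk.encode("ascii"))
        finally:
            os.close(fd)
        return path

    print(write_psk_file("NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:"))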
00:18:32.939 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.939 00:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.939 [2024-07-16 00:55:07.537668] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:18:32.939 [2024-07-16 00:55:07.537743] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.939 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.939 [2024-07-16 00:55:07.604670] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.197 [2024-07-16 00:55:07.719100] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.197 [2024-07-16 00:55:07.719153] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.197 [2024-07-16 00:55:07.719178] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.197 [2024-07-16 00:55:07.719191] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.197 [2024-07-16 00:55:07.719203] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:33.197 [2024-07-16 00:55:07.719233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.762 00:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.762 00:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:33.762 00:55:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:33.762 00:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:33.762 00:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.019 00:55:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.019 00:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.IScbN68pi4 00:18:34.019 00:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:34.019 00:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.IScbN68pi4 00:18:34.019 00:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:34.019 00:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:34.019 00:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:34.019 00:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:34.020 00:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.IScbN68pi4 00:18:34.020 00:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IScbN68pi4 00:18:34.020 00:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:34.279 [2024-07-16 00:55:08.803253] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.279 00:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:34.579 
00:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:34.837 [2024-07-16 00:55:09.392807] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:34.837 [2024-07-16 00:55:09.393055] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.837 00:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:35.095 malloc0 00:18:35.095 00:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:35.352 00:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IScbN68pi4 00:18:35.609 [2024-07-16 00:55:10.199661] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:35.609 [2024-07-16 00:55:10.199703] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:35.609 [2024-07-16 00:55:10.199752] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:35.609 request: 00:18:35.609 { 00:18:35.609 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.609 "host": "nqn.2016-06.io.spdk:host1", 00:18:35.609 "psk": "/tmp/tmp.IScbN68pi4", 00:18:35.609 "method": "nvmf_subsystem_add_host", 00:18:35.610 "req_id": 1 00:18:35.610 } 00:18:35.610 Got JSON-RPC error response 00:18:35.610 response: 00:18:35.610 { 00:18:35.610 "code": -32603, 00:18:35.610 "message": "Internal error" 00:18:35.610 } 00:18:35.610 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:35.610 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:35.610 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:35.610 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:35.610 00:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2667488 00:18:35.610 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2667488 ']' 00:18:35.610 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2667488 00:18:35.610 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:35.610 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:35.610 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2667488 00:18:35.610 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:35.610 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:35.610 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2667488' 00:18:35.610 killing process with pid 2667488 00:18:35.610 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2667488 00:18:35.610 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2667488 00:18:35.867 00:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.IScbN68pi4 00:18:35.867 00:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:35.867 
00:55:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:35.867 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:35.867 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.867 00:55:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2667812 00:18:35.867 00:55:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:35.867 00:55:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2667812 00:18:35.867 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2667812 ']' 00:18:35.867 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.867 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:35.867 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.867 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:35.867 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.867 [2024-07-16 00:55:10.604758] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:18:35.867 [2024-07-16 00:55:10.604850] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.125 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.125 [2024-07-16 00:55:10.688054] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.125 [2024-07-16 00:55:10.837368] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.125 [2024-07-16 00:55:10.837451] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.125 [2024-07-16 00:55:10.837477] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.125 [2024-07-16 00:55:10.837501] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.125 [2024-07-16 00:55:10.837521] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:36.125 [2024-07-16 00:55:10.837569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.383 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:36.383 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:36.383 00:55:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:36.383 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:36.383 00:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.383 00:55:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.383 00:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.IScbN68pi4 00:18:36.383 00:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IScbN68pi4 00:18:36.383 00:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:36.640 [2024-07-16 00:55:11.262238] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.640 00:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:36.897 00:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:37.155 [2024-07-16 00:55:11.815751] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:37.155 [2024-07-16 00:55:11.816005] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.155 00:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:37.413 malloc0 00:18:37.413 00:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:37.670 00:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IScbN68pi4 00:18:37.928 [2024-07-16 00:55:12.536951] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:37.928 00:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2668080 00:18:37.928 00:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:37.928 00:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:37.928 00:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2668080 /var/tmp/bdevperf.sock 00:18:37.928 00:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2668080 ']' 00:18:37.928 00:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.928 00:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.928 00:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.928 00:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.928 00:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.928 [2024-07-16 00:55:12.595099] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:18:37.928 [2024-07-16 00:55:12.595181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668080 ] 00:18:37.928 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.928 [2024-07-16 00:55:12.652289] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.189 [2024-07-16 00:55:12.762551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.189 00:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.189 00:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:38.189 00:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IScbN68pi4 00:18:38.447 [2024-07-16 00:55:13.090233] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.447 [2024-07-16 00:55:13.090364] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:38.447 TLSTESTn1 00:18:38.447 00:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:39.016 00:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:39.016 "subsystems": [ 00:18:39.016 { 00:18:39.016 "subsystem": "keyring", 00:18:39.016 "config": [] 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "subsystem": "iobuf", 00:18:39.016 "config": [ 00:18:39.016 { 00:18:39.016 "method": "iobuf_set_options", 00:18:39.016 "params": { 00:18:39.016 "small_pool_count": 8192, 00:18:39.016 "large_pool_count": 1024, 00:18:39.016 "small_bufsize": 8192, 00:18:39.016 "large_bufsize": 135168 00:18:39.016 } 00:18:39.016 } 00:18:39.016 ] 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "subsystem": "sock", 00:18:39.016 "config": [ 00:18:39.016 { 00:18:39.016 "method": "sock_set_default_impl", 00:18:39.016 "params": { 00:18:39.016 "impl_name": "posix" 00:18:39.016 } 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "method": "sock_impl_set_options", 00:18:39.016 "params": { 00:18:39.016 "impl_name": "ssl", 00:18:39.016 "recv_buf_size": 4096, 00:18:39.016 "send_buf_size": 4096, 00:18:39.016 "enable_recv_pipe": true, 00:18:39.016 "enable_quickack": false, 00:18:39.016 "enable_placement_id": 0, 00:18:39.016 "enable_zerocopy_send_server": true, 00:18:39.016 "enable_zerocopy_send_client": false, 00:18:39.016 "zerocopy_threshold": 0, 00:18:39.016 "tls_version": 0, 00:18:39.016 "enable_ktls": false 00:18:39.016 } 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "method": "sock_impl_set_options", 00:18:39.016 "params": { 00:18:39.016 "impl_name": "posix", 00:18:39.016 "recv_buf_size": 2097152, 00:18:39.016 
"send_buf_size": 2097152, 00:18:39.016 "enable_recv_pipe": true, 00:18:39.016 "enable_quickack": false, 00:18:39.016 "enable_placement_id": 0, 00:18:39.016 "enable_zerocopy_send_server": true, 00:18:39.016 "enable_zerocopy_send_client": false, 00:18:39.016 "zerocopy_threshold": 0, 00:18:39.016 "tls_version": 0, 00:18:39.016 "enable_ktls": false 00:18:39.016 } 00:18:39.016 } 00:18:39.016 ] 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "subsystem": "vmd", 00:18:39.016 "config": [] 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "subsystem": "accel", 00:18:39.016 "config": [ 00:18:39.016 { 00:18:39.016 "method": "accel_set_options", 00:18:39.016 "params": { 00:18:39.016 "small_cache_size": 128, 00:18:39.016 "large_cache_size": 16, 00:18:39.016 "task_count": 2048, 00:18:39.016 "sequence_count": 2048, 00:18:39.016 "buf_count": 2048 00:18:39.016 } 00:18:39.016 } 00:18:39.016 ] 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "subsystem": "bdev", 00:18:39.016 "config": [ 00:18:39.016 { 00:18:39.016 "method": "bdev_set_options", 00:18:39.016 "params": { 00:18:39.016 "bdev_io_pool_size": 65535, 00:18:39.016 "bdev_io_cache_size": 256, 00:18:39.016 "bdev_auto_examine": true, 00:18:39.016 "iobuf_small_cache_size": 128, 00:18:39.016 "iobuf_large_cache_size": 16 00:18:39.016 } 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "method": "bdev_raid_set_options", 00:18:39.016 "params": { 00:18:39.016 "process_window_size_kb": 1024 00:18:39.016 } 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "method": "bdev_iscsi_set_options", 00:18:39.016 "params": { 00:18:39.016 "timeout_sec": 30 00:18:39.016 } 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "method": "bdev_nvme_set_options", 00:18:39.016 "params": { 00:18:39.016 "action_on_timeout": "none", 00:18:39.016 "timeout_us": 0, 00:18:39.016 "timeout_admin_us": 0, 00:18:39.016 "keep_alive_timeout_ms": 10000, 00:18:39.016 "arbitration_burst": 0, 00:18:39.016 "low_priority_weight": 0, 00:18:39.016 "medium_priority_weight": 0, 00:18:39.016 "high_priority_weight": 0, 00:18:39.016 "nvme_adminq_poll_period_us": 10000, 00:18:39.016 "nvme_ioq_poll_period_us": 0, 00:18:39.016 "io_queue_requests": 0, 00:18:39.016 "delay_cmd_submit": true, 00:18:39.016 "transport_retry_count": 4, 00:18:39.016 "bdev_retry_count": 3, 00:18:39.016 "transport_ack_timeout": 0, 00:18:39.016 "ctrlr_loss_timeout_sec": 0, 00:18:39.016 "reconnect_delay_sec": 0, 00:18:39.016 "fast_io_fail_timeout_sec": 0, 00:18:39.016 "disable_auto_failback": false, 00:18:39.016 "generate_uuids": false, 00:18:39.016 "transport_tos": 0, 00:18:39.016 "nvme_error_stat": false, 00:18:39.016 "rdma_srq_size": 0, 00:18:39.016 "io_path_stat": false, 00:18:39.016 "allow_accel_sequence": false, 00:18:39.016 "rdma_max_cq_size": 0, 00:18:39.016 "rdma_cm_event_timeout_ms": 0, 00:18:39.016 "dhchap_digests": [ 00:18:39.016 "sha256", 00:18:39.016 "sha384", 00:18:39.016 "sha512" 00:18:39.016 ], 00:18:39.016 "dhchap_dhgroups": [ 00:18:39.016 "null", 00:18:39.016 "ffdhe2048", 00:18:39.016 "ffdhe3072", 00:18:39.016 "ffdhe4096", 00:18:39.016 "ffdhe6144", 00:18:39.016 "ffdhe8192" 00:18:39.016 ] 00:18:39.016 } 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "method": "bdev_nvme_set_hotplug", 00:18:39.016 "params": { 00:18:39.016 "period_us": 100000, 00:18:39.016 "enable": false 00:18:39.016 } 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "method": "bdev_malloc_create", 00:18:39.016 "params": { 00:18:39.016 "name": "malloc0", 00:18:39.016 "num_blocks": 8192, 00:18:39.016 "block_size": 4096, 00:18:39.016 "physical_block_size": 4096, 00:18:39.016 "uuid": 
"47bf3543-c38b-4217-b6ae-0c874b5534ba", 00:18:39.016 "optimal_io_boundary": 0 00:18:39.016 } 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "method": "bdev_wait_for_examine" 00:18:39.016 } 00:18:39.016 ] 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "subsystem": "nbd", 00:18:39.016 "config": [] 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "subsystem": "scheduler", 00:18:39.016 "config": [ 00:18:39.016 { 00:18:39.016 "method": "framework_set_scheduler", 00:18:39.016 "params": { 00:18:39.016 "name": "static" 00:18:39.016 } 00:18:39.016 } 00:18:39.016 ] 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "subsystem": "nvmf", 00:18:39.016 "config": [ 00:18:39.016 { 00:18:39.016 "method": "nvmf_set_config", 00:18:39.016 "params": { 00:18:39.016 "discovery_filter": "match_any", 00:18:39.016 "admin_cmd_passthru": { 00:18:39.016 "identify_ctrlr": false 00:18:39.016 } 00:18:39.016 } 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "method": "nvmf_set_max_subsystems", 00:18:39.016 "params": { 00:18:39.016 "max_subsystems": 1024 00:18:39.016 } 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "method": "nvmf_set_crdt", 00:18:39.016 "params": { 00:18:39.016 "crdt1": 0, 00:18:39.016 "crdt2": 0, 00:18:39.016 "crdt3": 0 00:18:39.016 } 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "method": "nvmf_create_transport", 00:18:39.016 "params": { 00:18:39.016 "trtype": "TCP", 00:18:39.016 "max_queue_depth": 128, 00:18:39.016 "max_io_qpairs_per_ctrlr": 127, 00:18:39.016 "in_capsule_data_size": 4096, 00:18:39.016 "max_io_size": 131072, 00:18:39.016 "io_unit_size": 131072, 00:18:39.016 "max_aq_depth": 128, 00:18:39.016 "num_shared_buffers": 511, 00:18:39.016 "buf_cache_size": 4294967295, 00:18:39.016 "dif_insert_or_strip": false, 00:18:39.016 "zcopy": false, 00:18:39.016 "c2h_success": false, 00:18:39.016 "sock_priority": 0, 00:18:39.016 "abort_timeout_sec": 1, 00:18:39.016 "ack_timeout": 0, 00:18:39.016 "data_wr_pool_size": 0 00:18:39.016 } 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "method": "nvmf_create_subsystem", 00:18:39.016 "params": { 00:18:39.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.016 "allow_any_host": false, 00:18:39.016 "serial_number": "SPDK00000000000001", 00:18:39.016 "model_number": "SPDK bdev Controller", 00:18:39.016 "max_namespaces": 10, 00:18:39.016 "min_cntlid": 1, 00:18:39.016 "max_cntlid": 65519, 00:18:39.016 "ana_reporting": false 00:18:39.016 } 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "method": "nvmf_subsystem_add_host", 00:18:39.016 "params": { 00:18:39.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.016 "host": "nqn.2016-06.io.spdk:host1", 00:18:39.016 "psk": "/tmp/tmp.IScbN68pi4" 00:18:39.016 } 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "method": "nvmf_subsystem_add_ns", 00:18:39.016 "params": { 00:18:39.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.016 "namespace": { 00:18:39.016 "nsid": 1, 00:18:39.016 "bdev_name": "malloc0", 00:18:39.016 "nguid": "47BF3543C38B4217B6AE0C874B5534BA", 00:18:39.016 "uuid": "47bf3543-c38b-4217-b6ae-0c874b5534ba", 00:18:39.016 "no_auto_visible": false 00:18:39.016 } 00:18:39.016 } 00:18:39.016 }, 00:18:39.016 { 00:18:39.016 "method": "nvmf_subsystem_add_listener", 00:18:39.016 "params": { 00:18:39.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.016 "listen_address": { 00:18:39.016 "trtype": "TCP", 00:18:39.016 "adrfam": "IPv4", 00:18:39.016 "traddr": "10.0.0.2", 00:18:39.016 "trsvcid": "4420" 00:18:39.016 }, 00:18:39.016 "secure_channel": true 00:18:39.016 } 00:18:39.016 } 00:18:39.016 ] 00:18:39.016 } 00:18:39.016 ] 00:18:39.016 }' 00:18:39.016 00:55:13 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:39.274 00:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:39.274 "subsystems": [ 00:18:39.274 { 00:18:39.274 "subsystem": "keyring", 00:18:39.274 "config": [] 00:18:39.274 }, 00:18:39.274 { 00:18:39.274 "subsystem": "iobuf", 00:18:39.274 "config": [ 00:18:39.274 { 00:18:39.274 "method": "iobuf_set_options", 00:18:39.274 "params": { 00:18:39.274 "small_pool_count": 8192, 00:18:39.274 "large_pool_count": 1024, 00:18:39.274 "small_bufsize": 8192, 00:18:39.274 "large_bufsize": 135168 00:18:39.274 } 00:18:39.274 } 00:18:39.274 ] 00:18:39.274 }, 00:18:39.274 { 00:18:39.274 "subsystem": "sock", 00:18:39.274 "config": [ 00:18:39.274 { 00:18:39.274 "method": "sock_set_default_impl", 00:18:39.274 "params": { 00:18:39.274 "impl_name": "posix" 00:18:39.274 } 00:18:39.274 }, 00:18:39.274 { 00:18:39.274 "method": "sock_impl_set_options", 00:18:39.274 "params": { 00:18:39.274 "impl_name": "ssl", 00:18:39.274 "recv_buf_size": 4096, 00:18:39.274 "send_buf_size": 4096, 00:18:39.274 "enable_recv_pipe": true, 00:18:39.274 "enable_quickack": false, 00:18:39.274 "enable_placement_id": 0, 00:18:39.274 "enable_zerocopy_send_server": true, 00:18:39.274 "enable_zerocopy_send_client": false, 00:18:39.274 "zerocopy_threshold": 0, 00:18:39.274 "tls_version": 0, 00:18:39.274 "enable_ktls": false 00:18:39.274 } 00:18:39.274 }, 00:18:39.274 { 00:18:39.274 "method": "sock_impl_set_options", 00:18:39.274 "params": { 00:18:39.274 "impl_name": "posix", 00:18:39.274 "recv_buf_size": 2097152, 00:18:39.274 "send_buf_size": 2097152, 00:18:39.274 "enable_recv_pipe": true, 00:18:39.274 "enable_quickack": false, 00:18:39.274 "enable_placement_id": 0, 00:18:39.274 "enable_zerocopy_send_server": true, 00:18:39.274 "enable_zerocopy_send_client": false, 00:18:39.274 "zerocopy_threshold": 0, 00:18:39.274 "tls_version": 0, 00:18:39.274 "enable_ktls": false 00:18:39.274 } 00:18:39.274 } 00:18:39.274 ] 00:18:39.274 }, 00:18:39.274 { 00:18:39.274 "subsystem": "vmd", 00:18:39.274 "config": [] 00:18:39.274 }, 00:18:39.274 { 00:18:39.274 "subsystem": "accel", 00:18:39.274 "config": [ 00:18:39.275 { 00:18:39.275 "method": "accel_set_options", 00:18:39.275 "params": { 00:18:39.275 "small_cache_size": 128, 00:18:39.275 "large_cache_size": 16, 00:18:39.275 "task_count": 2048, 00:18:39.275 "sequence_count": 2048, 00:18:39.275 "buf_count": 2048 00:18:39.275 } 00:18:39.275 } 00:18:39.275 ] 00:18:39.275 }, 00:18:39.275 { 00:18:39.275 "subsystem": "bdev", 00:18:39.275 "config": [ 00:18:39.275 { 00:18:39.275 "method": "bdev_set_options", 00:18:39.275 "params": { 00:18:39.275 "bdev_io_pool_size": 65535, 00:18:39.275 "bdev_io_cache_size": 256, 00:18:39.275 "bdev_auto_examine": true, 00:18:39.275 "iobuf_small_cache_size": 128, 00:18:39.275 "iobuf_large_cache_size": 16 00:18:39.275 } 00:18:39.275 }, 00:18:39.275 { 00:18:39.275 "method": "bdev_raid_set_options", 00:18:39.275 "params": { 00:18:39.275 "process_window_size_kb": 1024 00:18:39.275 } 00:18:39.275 }, 00:18:39.275 { 00:18:39.275 "method": "bdev_iscsi_set_options", 00:18:39.275 "params": { 00:18:39.275 "timeout_sec": 30 00:18:39.275 } 00:18:39.275 }, 00:18:39.275 { 00:18:39.275 "method": "bdev_nvme_set_options", 00:18:39.275 "params": { 00:18:39.275 "action_on_timeout": "none", 00:18:39.275 "timeout_us": 0, 00:18:39.275 "timeout_admin_us": 0, 00:18:39.275 "keep_alive_timeout_ms": 10000, 00:18:39.275 "arbitration_burst": 0, 
00:18:39.275 "low_priority_weight": 0, 00:18:39.275 "medium_priority_weight": 0, 00:18:39.275 "high_priority_weight": 0, 00:18:39.275 "nvme_adminq_poll_period_us": 10000, 00:18:39.275 "nvme_ioq_poll_period_us": 0, 00:18:39.275 "io_queue_requests": 512, 00:18:39.275 "delay_cmd_submit": true, 00:18:39.275 "transport_retry_count": 4, 00:18:39.275 "bdev_retry_count": 3, 00:18:39.275 "transport_ack_timeout": 0, 00:18:39.275 "ctrlr_loss_timeout_sec": 0, 00:18:39.275 "reconnect_delay_sec": 0, 00:18:39.275 "fast_io_fail_timeout_sec": 0, 00:18:39.275 "disable_auto_failback": false, 00:18:39.275 "generate_uuids": false, 00:18:39.275 "transport_tos": 0, 00:18:39.275 "nvme_error_stat": false, 00:18:39.275 "rdma_srq_size": 0, 00:18:39.275 "io_path_stat": false, 00:18:39.275 "allow_accel_sequence": false, 00:18:39.275 "rdma_max_cq_size": 0, 00:18:39.275 "rdma_cm_event_timeout_ms": 0, 00:18:39.275 "dhchap_digests": [ 00:18:39.275 "sha256", 00:18:39.275 "sha384", 00:18:39.275 "sha512" 00:18:39.275 ], 00:18:39.275 "dhchap_dhgroups": [ 00:18:39.275 "null", 00:18:39.275 "ffdhe2048", 00:18:39.275 "ffdhe3072", 00:18:39.275 "ffdhe4096", 00:18:39.275 "ffdhe6144", 00:18:39.275 "ffdhe8192" 00:18:39.275 ] 00:18:39.275 } 00:18:39.275 }, 00:18:39.275 { 00:18:39.275 "method": "bdev_nvme_attach_controller", 00:18:39.275 "params": { 00:18:39.275 "name": "TLSTEST", 00:18:39.275 "trtype": "TCP", 00:18:39.275 "adrfam": "IPv4", 00:18:39.275 "traddr": "10.0.0.2", 00:18:39.275 "trsvcid": "4420", 00:18:39.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.275 "prchk_reftag": false, 00:18:39.275 "prchk_guard": false, 00:18:39.275 "ctrlr_loss_timeout_sec": 0, 00:18:39.275 "reconnect_delay_sec": 0, 00:18:39.275 "fast_io_fail_timeout_sec": 0, 00:18:39.275 "psk": "/tmp/tmp.IScbN68pi4", 00:18:39.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.275 "hdgst": false, 00:18:39.275 "ddgst": false 00:18:39.275 } 00:18:39.275 }, 00:18:39.275 { 00:18:39.275 "method": "bdev_nvme_set_hotplug", 00:18:39.275 "params": { 00:18:39.275 "period_us": 100000, 00:18:39.275 "enable": false 00:18:39.275 } 00:18:39.275 }, 00:18:39.275 { 00:18:39.275 "method": "bdev_wait_for_examine" 00:18:39.275 } 00:18:39.275 ] 00:18:39.275 }, 00:18:39.275 { 00:18:39.275 "subsystem": "nbd", 00:18:39.275 "config": [] 00:18:39.275 } 00:18:39.275 ] 00:18:39.275 }' 00:18:39.275 00:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2668080 00:18:39.275 00:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2668080 ']' 00:18:39.275 00:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2668080 00:18:39.275 00:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:39.275 00:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:39.275 00:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2668080 00:18:39.275 00:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:39.275 00:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:39.275 00:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2668080' 00:18:39.275 killing process with pid 2668080 00:18:39.275 00:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2668080 00:18:39.275 Received shutdown signal, test time was about 10.000000 seconds 00:18:39.275 00:18:39.275 Latency(us) 00:18:39.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:18:39.275 =================================================================================================================== 00:18:39.275 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:39.275 [2024-07-16 00:55:13.821280] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:39.275 00:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2668080 00:18:39.534 00:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2667812 00:18:39.534 00:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2667812 ']' 00:18:39.534 00:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2667812 00:18:39.534 00:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:39.534 00:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:39.534 00:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2667812 00:18:39.534 00:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:39.534 00:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:39.534 00:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2667812' 00:18:39.534 killing process with pid 2667812 00:18:39.534 00:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2667812 00:18:39.534 [2024-07-16 00:55:14.096107] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:39.534 00:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2667812 00:18:39.792 00:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:39.792 00:55:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:39.792 00:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:39.792 00:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:39.792 "subsystems": [ 00:18:39.792 { 00:18:39.792 "subsystem": "keyring", 00:18:39.792 "config": [] 00:18:39.792 }, 00:18:39.792 { 00:18:39.792 "subsystem": "iobuf", 00:18:39.792 "config": [ 00:18:39.792 { 00:18:39.792 "method": "iobuf_set_options", 00:18:39.792 "params": { 00:18:39.792 "small_pool_count": 8192, 00:18:39.792 "large_pool_count": 1024, 00:18:39.792 "small_bufsize": 8192, 00:18:39.792 "large_bufsize": 135168 00:18:39.792 } 00:18:39.792 } 00:18:39.792 ] 00:18:39.792 }, 00:18:39.792 { 00:18:39.792 "subsystem": "sock", 00:18:39.792 "config": [ 00:18:39.792 { 00:18:39.792 "method": "sock_set_default_impl", 00:18:39.792 "params": { 00:18:39.792 "impl_name": "posix" 00:18:39.792 } 00:18:39.792 }, 00:18:39.792 { 00:18:39.792 "method": "sock_impl_set_options", 00:18:39.792 "params": { 00:18:39.792 "impl_name": "ssl", 00:18:39.792 "recv_buf_size": 4096, 00:18:39.792 "send_buf_size": 4096, 00:18:39.792 "enable_recv_pipe": true, 00:18:39.792 "enable_quickack": false, 00:18:39.792 "enable_placement_id": 0, 00:18:39.792 "enable_zerocopy_send_server": true, 00:18:39.792 "enable_zerocopy_send_client": false, 00:18:39.792 "zerocopy_threshold": 0, 00:18:39.792 "tls_version": 0, 00:18:39.792 "enable_ktls": false 00:18:39.792 } 00:18:39.792 }, 00:18:39.792 { 00:18:39.792 "method": "sock_impl_set_options", 00:18:39.792 "params": { 00:18:39.792 "impl_name": "posix", 00:18:39.792 
"recv_buf_size": 2097152, 00:18:39.792 "send_buf_size": 2097152, 00:18:39.792 "enable_recv_pipe": true, 00:18:39.792 "enable_quickack": false, 00:18:39.792 "enable_placement_id": 0, 00:18:39.792 "enable_zerocopy_send_server": true, 00:18:39.792 "enable_zerocopy_send_client": false, 00:18:39.792 "zerocopy_threshold": 0, 00:18:39.792 "tls_version": 0, 00:18:39.792 "enable_ktls": false 00:18:39.792 } 00:18:39.792 } 00:18:39.792 ] 00:18:39.792 }, 00:18:39.792 { 00:18:39.792 "subsystem": "vmd", 00:18:39.792 "config": [] 00:18:39.792 }, 00:18:39.792 { 00:18:39.792 "subsystem": "accel", 00:18:39.792 "config": [ 00:18:39.792 { 00:18:39.792 "method": "accel_set_options", 00:18:39.792 "params": { 00:18:39.792 "small_cache_size": 128, 00:18:39.792 "large_cache_size": 16, 00:18:39.792 "task_count": 2048, 00:18:39.792 "sequence_count": 2048, 00:18:39.792 "buf_count": 2048 00:18:39.792 } 00:18:39.792 } 00:18:39.792 ] 00:18:39.792 }, 00:18:39.792 { 00:18:39.792 "subsystem": "bdev", 00:18:39.792 "config": [ 00:18:39.792 { 00:18:39.792 "method": "bdev_set_options", 00:18:39.792 "params": { 00:18:39.792 "bdev_io_pool_size": 65535, 00:18:39.792 "bdev_io_cache_size": 256, 00:18:39.792 "bdev_auto_examine": true, 00:18:39.792 "iobuf_small_cache_size": 128, 00:18:39.792 "iobuf_large_cache_size": 16 00:18:39.792 } 00:18:39.792 }, 00:18:39.792 { 00:18:39.792 "method": "bdev_raid_set_options", 00:18:39.792 "params": { 00:18:39.792 "process_window_size_kb": 1024 00:18:39.792 } 00:18:39.792 }, 00:18:39.792 { 00:18:39.792 "method": "bdev_iscsi_set_options", 00:18:39.792 "params": { 00:18:39.792 "timeout_sec": 30 00:18:39.792 } 00:18:39.792 }, 00:18:39.792 { 00:18:39.792 "method": "bdev_nvme_set_options", 00:18:39.792 "params": { 00:18:39.792 "action_on_timeout": "none", 00:18:39.792 "timeout_us": 0, 00:18:39.792 "timeout_admin_us": 0, 00:18:39.792 "keep_alive_timeout_ms": 10000, 00:18:39.792 "arbitration_burst": 0, 00:18:39.792 "low_priority_weight": 0, 00:18:39.792 "medium_priority_weight": 0, 00:18:39.792 "high_priority_weight": 0, 00:18:39.792 "nvme_adminq_poll_period_us": 10000, 00:18:39.792 "nvme_ioq_poll_period_us": 0, 00:18:39.792 "io_queue_requests": 0, 00:18:39.792 "delay_cmd_submit": true, 00:18:39.792 "transport_retry_count": 4, 00:18:39.792 "bdev_retry_count": 3, 00:18:39.792 "transport_ack_timeout": 0, 00:18:39.792 "ctrlr_loss_timeout_sec": 0, 00:18:39.792 "reconnect_delay_sec": 0, 00:18:39.792 "fast_io_fail_timeout_sec": 0, 00:18:39.792 "disable_auto_failback": false, 00:18:39.792 "generate_uuids": false, 00:18:39.792 "transport_tos": 0, 00:18:39.792 "nvme_error_stat": false, 00:18:39.792 "rdma_srq_size": 0, 00:18:39.792 "io_path_stat": false, 00:18:39.792 "allow_accel_sequence": false, 00:18:39.792 "rdma_max_cq_size": 0, 00:18:39.792 "rdma_cm_event_timeout_ms": 0, 00:18:39.792 "dhchap_digests": [ 00:18:39.792 "sha256", 00:18:39.792 "sha384", 00:18:39.792 "sha512" 00:18:39.792 ], 00:18:39.792 "dhchap_dhgroups": [ 00:18:39.792 "null", 00:18:39.792 "ffdhe2048", 00:18:39.792 "ffdhe3072", 00:18:39.792 "ffdhe4096", 00:18:39.792 "ffdhe6144", 00:18:39.792 "ffdhe8192" 00:18:39.792 ] 00:18:39.792 } 00:18:39.792 }, 00:18:39.792 { 00:18:39.792 "method": "bdev_nvme_set_hotplug", 00:18:39.792 "params": { 00:18:39.792 "period_us": 100000, 00:18:39.792 "enable": false 00:18:39.793 } 00:18:39.793 }, 00:18:39.793 { 00:18:39.793 "method": "bdev_malloc_create", 00:18:39.793 "params": { 00:18:39.793 "name": "malloc0", 00:18:39.793 "num_blocks": 8192, 00:18:39.793 "block_size": 4096, 00:18:39.793 "physical_block_size": 4096, 
00:18:39.793 "uuid": "47bf3543-c38b-4217-b6ae-0c874b5534ba", 00:18:39.793 "optimal_io_boundary": 0 00:18:39.793 } 00:18:39.793 }, 00:18:39.793 { 00:18:39.793 "method": "bdev_wait_for_examine" 00:18:39.793 } 00:18:39.793 ] 00:18:39.793 }, 00:18:39.793 { 00:18:39.793 "subsystem": "nbd", 00:18:39.793 "config": [] 00:18:39.793 }, 00:18:39.793 { 00:18:39.793 "subsystem": "scheduler", 00:18:39.793 "config": [ 00:18:39.793 { 00:18:39.793 "method": "framework_set_scheduler", 00:18:39.793 "params": { 00:18:39.793 "name": "static" 00:18:39.793 } 00:18:39.793 } 00:18:39.793 ] 00:18:39.793 }, 00:18:39.793 { 00:18:39.793 "subsystem": "nvmf", 00:18:39.793 "config": [ 00:18:39.793 { 00:18:39.793 "method": "nvmf_set_config", 00:18:39.793 "params": { 00:18:39.793 "discovery_filter": "match_any", 00:18:39.793 "admin_cmd_passthru": { 00:18:39.793 "identify_ctrlr": false 00:18:39.793 } 00:18:39.793 } 00:18:39.793 }, 00:18:39.793 { 00:18:39.793 "method": "nvmf_set_max_subsystems", 00:18:39.793 "params": { 00:18:39.793 "max_subsystems": 1024 00:18:39.793 } 00:18:39.793 }, 00:18:39.793 { 00:18:39.793 "method": "nvmf_set_crdt", 00:18:39.793 "params": { 00:18:39.793 "crdt1": 0, 00:18:39.793 "crdt2": 0, 00:18:39.793 "crdt3": 0 00:18:39.793 } 00:18:39.793 }, 00:18:39.793 { 00:18:39.793 "method": "nvmf_create_transport", 00:18:39.793 "params": { 00:18:39.793 "trtype": "TCP", 00:18:39.793 "max_queue_depth": 128, 00:18:39.793 "max_io_qpairs_per_ctrlr": 127, 00:18:39.793 "in_capsule_data_size": 4096, 00:18:39.793 "max_io_size": 131072, 00:18:39.793 "io_unit_size": 131072, 00:18:39.793 "max_aq_depth": 128, 00:18:39.793 "num_shared_buffers": 511, 00:18:39.793 "buf_cache_size": 4294967295, 00:18:39.793 "dif_insert_or_strip": false, 00:18:39.793 "zcopy": false, 00:18:39.793 "c2h_success": false, 00:18:39.793 "sock_priority": 0, 00:18:39.793 "abort_timeout_sec": 1, 00:18:39.793 "ack_timeout": 0, 00:18:39.793 "data_wr_pool_size": 0 00:18:39.793 } 00:18:39.793 }, 00:18:39.793 { 00:18:39.793 "method": "nvmf_create_subsystem", 00:18:39.793 "params": { 00:18:39.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.793 "allow_any_host": false, 00:18:39.793 "serial_number": "SPDK00000000000001", 00:18:39.793 "model_number": "SPDK bdev Controller", 00:18:39.793 "max_namespaces": 10, 00:18:39.793 "min_cntlid": 1, 00:18:39.793 "max_cntlid": 65519, 00:18:39.793 "ana_reporting": false 00:18:39.793 } 00:18:39.793 }, 00:18:39.793 { 00:18:39.793 "method": "nvmf_subsystem_add_host", 00:18:39.793 "params": { 00:18:39.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.793 "host": "nqn.2016-06.io.spdk:host1", 00:18:39.793 "psk": "/tmp/tmp.IScbN68pi4" 00:18:39.793 } 00:18:39.793 }, 00:18:39.793 { 00:18:39.793 "method": "nvmf_subsystem_add_ns", 00:18:39.793 "params": { 00:18:39.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.793 "namespace": { 00:18:39.793 "nsid": 1, 00:18:39.793 "bdev_name": "malloc0", 00:18:39.793 "nguid": "47BF3543C38B4217B6AE0C874B5534BA", 00:18:39.793 "uuid": "47bf3543-c38b-4217-b6ae-0c874b5534ba", 00:18:39.793 "no_auto_visible": false 00:18:39.793 } 00:18:39.793 } 00:18:39.793 }, 00:18:39.793 { 00:18:39.793 "method": "nvmf_subsystem_add_listener", 00:18:39.793 "params": { 00:18:39.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.793 "listen_address": { 00:18:39.793 "trtype": "TCP", 00:18:39.793 "adrfam": "IPv4", 00:18:39.793 "traddr": "10.0.0.2", 00:18:39.793 "trsvcid": "4420" 00:18:39.793 }, 00:18:39.793 "secure_channel": true 00:18:39.793 } 00:18:39.793 } 00:18:39.793 ] 00:18:39.793 } 00:18:39.793 ] 00:18:39.793 }' 
00:18:39.793 00:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.793 00:55:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2668357 00:18:39.793 00:55:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:39.793 00:55:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2668357 00:18:39.793 00:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2668357 ']' 00:18:39.793 00:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.793 00:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:39.793 00:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.793 00:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:39.793 00:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.793 [2024-07-16 00:55:14.433573] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:18:39.793 [2024-07-16 00:55:14.433676] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.793 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.793 [2024-07-16 00:55:14.500748] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.051 [2024-07-16 00:55:14.613807] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.051 [2024-07-16 00:55:14.613871] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.051 [2024-07-16 00:55:14.613908] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.051 [2024-07-16 00:55:14.613922] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.051 [2024-07-16 00:55:14.613934] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
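The JSON dumped above is the target configuration captured earlier with rpc.py save_config (stored in tgtconf), and this nvmf_tgt instance consumes it through -c /dev/fd/62, i.e. a process substitution, so the restarted target comes up with the TCP transport, subsystem, TLS listener and PSK host entry already applied instead of replaying the individual RPCs. A hedged sketch of that round trip (paths shortened):

  # snapshot the running target's configuration as JSON
  tgtconf=$(./scripts/rpc.py save_config)
  # bring up a new target preloaded with that snapshot
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 \
      -c <(echo "$tgtconf")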
00:18:40.051 [2024-07-16 00:55:14.614020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.309 [2024-07-16 00:55:14.854444] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.309 [2024-07-16 00:55:14.870379] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:40.309 [2024-07-16 00:55:14.886450] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:40.309 [2024-07-16 00:55:14.894091] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.876 00:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.876 00:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:40.876 00:55:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:40.876 00:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:40.876 00:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.876 00:55:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.876 00:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2668510 00:18:40.876 00:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2668510 /var/tmp/bdevperf.sock 00:18:40.876 00:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2668510 ']' 00:18:40.876 00:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.876 00:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:40.876 00:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.876 00:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:40.876 "subsystems": [ 00:18:40.876 { 00:18:40.876 "subsystem": "keyring", 00:18:40.876 "config": [] 00:18:40.876 }, 00:18:40.876 { 00:18:40.876 "subsystem": "iobuf", 00:18:40.876 "config": [ 00:18:40.876 { 00:18:40.876 "method": "iobuf_set_options", 00:18:40.876 "params": { 00:18:40.876 "small_pool_count": 8192, 00:18:40.876 "large_pool_count": 1024, 00:18:40.876 "small_bufsize": 8192, 00:18:40.876 "large_bufsize": 135168 00:18:40.876 } 00:18:40.876 } 00:18:40.876 ] 00:18:40.876 }, 00:18:40.876 { 00:18:40.876 "subsystem": "sock", 00:18:40.876 "config": [ 00:18:40.876 { 00:18:40.876 "method": "sock_set_default_impl", 00:18:40.876 "params": { 00:18:40.876 "impl_name": "posix" 00:18:40.876 } 00:18:40.876 }, 00:18:40.876 { 00:18:40.876 "method": "sock_impl_set_options", 00:18:40.876 "params": { 00:18:40.876 "impl_name": "ssl", 00:18:40.876 "recv_buf_size": 4096, 00:18:40.876 "send_buf_size": 4096, 00:18:40.876 "enable_recv_pipe": true, 00:18:40.876 "enable_quickack": false, 00:18:40.876 "enable_placement_id": 0, 00:18:40.876 "enable_zerocopy_send_server": true, 00:18:40.876 "enable_zerocopy_send_client": false, 00:18:40.876 "zerocopy_threshold": 0, 00:18:40.876 "tls_version": 0, 00:18:40.876 "enable_ktls": false 00:18:40.876 } 00:18:40.876 }, 00:18:40.876 { 00:18:40.876 "method": "sock_impl_set_options", 00:18:40.876 "params": { 00:18:40.876 "impl_name": "posix", 00:18:40.876 "recv_buf_size": 2097152, 00:18:40.876 "send_buf_size": 2097152, 00:18:40.876 "enable_recv_pipe": true, 00:18:40.876 
"enable_quickack": false, 00:18:40.876 "enable_placement_id": 0, 00:18:40.876 "enable_zerocopy_send_server": true, 00:18:40.876 "enable_zerocopy_send_client": false, 00:18:40.876 "zerocopy_threshold": 0, 00:18:40.876 "tls_version": 0, 00:18:40.876 "enable_ktls": false 00:18:40.876 } 00:18:40.876 } 00:18:40.876 ] 00:18:40.876 }, 00:18:40.876 { 00:18:40.876 "subsystem": "vmd", 00:18:40.876 "config": [] 00:18:40.876 }, 00:18:40.876 { 00:18:40.876 "subsystem": "accel", 00:18:40.876 "config": [ 00:18:40.876 { 00:18:40.876 "method": "accel_set_options", 00:18:40.876 "params": { 00:18:40.876 "small_cache_size": 128, 00:18:40.876 "large_cache_size": 16, 00:18:40.876 "task_count": 2048, 00:18:40.876 "sequence_count": 2048, 00:18:40.876 "buf_count": 2048 00:18:40.876 } 00:18:40.876 } 00:18:40.876 ] 00:18:40.876 }, 00:18:40.876 { 00:18:40.876 "subsystem": "bdev", 00:18:40.876 "config": [ 00:18:40.876 { 00:18:40.876 "method": "bdev_set_options", 00:18:40.876 "params": { 00:18:40.876 "bdev_io_pool_size": 65535, 00:18:40.876 "bdev_io_cache_size": 256, 00:18:40.876 "bdev_auto_examine": true, 00:18:40.876 "iobuf_small_cache_size": 128, 00:18:40.876 "iobuf_large_cache_size": 16 00:18:40.876 } 00:18:40.876 }, 00:18:40.876 { 00:18:40.876 "method": "bdev_raid_set_options", 00:18:40.876 "params": { 00:18:40.876 "process_window_size_kb": 1024 00:18:40.876 } 00:18:40.876 }, 00:18:40.876 { 00:18:40.876 "method": "bdev_iscsi_set_options", 00:18:40.876 "params": { 00:18:40.876 "timeout_sec": 30 00:18:40.876 } 00:18:40.876 }, 00:18:40.876 { 00:18:40.876 "method": "bdev_nvme_set_options", 00:18:40.876 "params": { 00:18:40.876 "action_on_timeout": "none", 00:18:40.876 "timeout_us": 0, 00:18:40.876 "timeout_admin_us": 0, 00:18:40.876 "keep_alive_timeout_ms": 10000, 00:18:40.876 "arbitration_burst": 0, 00:18:40.876 "low_priority_weight": 0, 00:18:40.876 "medium_priority_weight": 0, 00:18:40.876 "high_priority_weight": 0, 00:18:40.876 "nvme_adminq_poll_period_us": 10000, 00:18:40.876 "nvme_ioq_poll_period_us": 0, 00:18:40.876 "io_queue_requests": 512, 00:18:40.876 "delay_cmd_submit": true, 00:18:40.876 "transport_retry_count": 4, 00:18:40.876 "bdev_retry_count": 3, 00:18:40.876 "transport_ack_timeout": 0, 00:18:40.876 "ctrlr_loss_timeout_sec": 0, 00:18:40.876 "reconnect_delay_sec": 0, 00:18:40.876 "fast_io_fail_timeout_sec": 0, 00:18:40.876 "disable_auto_failback": false, 00:18:40.876 "generate_uuids": false, 00:18:40.876 "transport_tos": 0, 00:18:40.876 "nvme_error_stat": false, 00:18:40.876 "rdma_srq_size": 0, 00:18:40.876 "io_path_stat": false, 00:18:40.876 "allow_accel_sequence": false, 00:18:40.876 "rdma_max_cq_size": 0, 00:18:40.876 "rdma_cm_event_timeout_ms": 0, 00:18:40.876 "dhchap_digests": [ 00:18:40.876 "sha256", 00:18:40.876 "sha384", 00:18:40.876 "sha512" 00:18:40.876 ], 00:18:40.876 "dhchap_dhgroups": [ 00:18:40.876 "null", 00:18:40.876 "ffdhe2048", 00:18:40.876 "ffdhe3072", 00:18:40.876 "ffdhe4096", 00:18:40.876 "ffdhe6144", 00:18:40.876 "ffdhe8192" 00:18:40.876 ] 00:18:40.876 } 00:18:40.876 }, 00:18:40.876 { 00:18:40.876 "method": "bdev_nvme_attach_controller", 00:18:40.876 "params": { 00:18:40.876 "name": "TLSTEST", 00:18:40.876 "trtype": "TCP", 00:18:40.876 "adrfam": "IPv4", 00:18:40.876 "traddr": "10.0.0.2", 00:18:40.876 "trsvcid": "4420", 00:18:40.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.876 "prchk_reftag": false, 00:18:40.876 "prchk_guard": false, 00:18:40.876 "ctrlr_loss_timeout_sec": 0, 00:18:40.876 "reconnect_delay_sec": 0, 00:18:40.876 "fast_io_fail_timeout_sec": 0, 00:18:40.876 
"psk": "/tmp/tmp.IScbN68pi4", 00:18:40.876 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:40.876 "hdgst": false, 00:18:40.876 "ddgst": false 00:18:40.876 } 00:18:40.876 }, 00:18:40.876 { 00:18:40.876 "method": "bdev_nvme_set_hotplug", 00:18:40.876 "params": { 00:18:40.876 "period_us": 100000, 00:18:40.876 "enable": false 00:18:40.876 } 00:18:40.876 }, 00:18:40.876 { 00:18:40.876 "method": "bdev_wait_for_examine" 00:18:40.876 } 00:18:40.876 ] 00:18:40.876 }, 00:18:40.876 { 00:18:40.876 "subsystem": "nbd", 00:18:40.876 "config": [] 00:18:40.876 } 00:18:40.876 ] 00:18:40.876 }' 00:18:40.876 00:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:40.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:40.877 00:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.877 00:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.877 [2024-07-16 00:55:15.440823] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:18:40.877 [2024-07-16 00:55:15.440934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668510 ] 00:18:40.877 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.877 [2024-07-16 00:55:15.499289] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.877 [2024-07-16 00:55:15.607104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.135 [2024-07-16 00:55:15.781442] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.135 [2024-07-16 00:55:15.781572] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:41.700 00:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.700 00:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:41.700 00:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:41.957 Running I/O for 10 seconds... 
00:18:51.915 00:18:51.915 Latency(us) 00:18:51.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.915 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:51.915 Verification LBA range: start 0x0 length 0x2000 00:18:51.915 TLSTESTn1 : 10.07 1293.25 5.05 0.00 0.00 98652.65 11262.48 132819.63 00:18:51.915 =================================================================================================================== 00:18:51.915 Total : 1293.25 5.05 0.00 0.00 98652.65 11262.48 132819.63 00:18:51.915 0 00:18:51.915 00:55:26 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:51.915 00:55:26 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2668510 00:18:51.915 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2668510 ']' 00:18:51.915 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2668510 00:18:51.915 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:51.915 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:51.915 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2668510 00:18:51.915 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:51.915 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:51.915 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2668510' 00:18:51.915 killing process with pid 2668510 00:18:51.915 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2668510 00:18:51.915 Received shutdown signal, test time was about 10.000000 seconds 00:18:51.915 00:18:51.915 Latency(us) 00:18:51.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.915 =================================================================================================================== 00:18:51.915 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:51.915 [2024-07-16 00:55:26.665881] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:51.915 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2668510 00:18:52.173 00:55:26 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2668357 00:18:52.173 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2668357 ']' 00:18:52.173 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2668357 00:18:52.173 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:52.173 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:52.431 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2668357 00:18:52.431 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:52.431 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:52.431 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2668357' 00:18:52.431 killing process with pid 2668357 00:18:52.431 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2668357 00:18:52.431 [2024-07-16 00:55:26.957221] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for 
removal in v24.09 hit 1 times 00:18:52.431 00:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2668357 00:18:52.689 00:55:27 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:52.689 00:55:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:52.689 00:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:52.689 00:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.689 00:55:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2669839 00:18:52.689 00:55:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:52.689 00:55:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2669839 00:18:52.689 00:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2669839 ']' 00:18:52.689 00:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.689 00:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:52.689 00:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.689 00:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:52.689 00:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.689 [2024-07-16 00:55:27.288402] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:18:52.689 [2024-07-16 00:55:27.288486] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.689 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.689 [2024-07-16 00:55:27.356033] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.947 [2024-07-16 00:55:27.470246] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.948 [2024-07-16 00:55:27.470294] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.948 [2024-07-16 00:55:27.470315] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.948 [2024-07-16 00:55:27.470326] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.948 [2024-07-16 00:55:27.470335] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:52.948 [2024-07-16 00:55:27.470360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.882 00:55:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.882 00:55:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:53.882 00:55:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:53.882 00:55:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:53.882 00:55:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.882 00:55:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.882 00:55:28 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.IScbN68pi4 00:18:53.882 00:55:28 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IScbN68pi4 00:18:53.882 00:55:28 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:53.882 [2024-07-16 00:55:28.570740] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.882 00:55:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:54.140 00:55:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:54.399 [2024-07-16 00:55:29.120236] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:54.399 [2024-07-16 00:55:29.120488] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.399 00:55:29 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:54.658 malloc0 00:18:54.658 00:55:29 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:54.916 00:55:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IScbN68pi4 00:18:55.173 [2024-07-16 00:55:29.845407] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:55.173 00:55:29 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2670161 00:18:55.173 00:55:29 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:55.173 00:55:29 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:55.173 00:55:29 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2670161 /var/tmp/bdevperf.sock 00:18:55.173 00:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2670161 ']' 00:18:55.173 00:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.173 00:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.173 00:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:55.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:55.173 00:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.173 00:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.173 [2024-07-16 00:55:29.907371] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:18:55.173 [2024-07-16 00:55:29.907472] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670161 ] 00:18:55.451 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.451 [2024-07-16 00:55:29.972263] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.451 [2024-07-16 00:55:30.092250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.724 00:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:55.724 00:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:55.724 00:55:30 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IScbN68pi4 00:18:55.724 00:55:30 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:55.982 [2024-07-16 00:55:30.689143] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:56.240 nvme0n1 00:18:56.240 00:55:30 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:56.240 Running I/O for 1 seconds... 
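This pass differs from the earlier one in how the initiator gets its PSK: instead of passing the key file directly to bdev_nvme_attach_controller, the file is first registered with keyring_file_add_key and the controller then references it by name (key0). The verify workload is again kicked off through bdevperf's RPC helper on the idle (-z) instance. A sketch of those three steps as exercised above, with paths shortened:

  # register the PSK file as a named keyring entry on bdevperf's RPC socket
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IScbN68pi4
  # attach the TLS-protected controller, referencing the key by name rather than by path
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # run the configured verify workload
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests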
00:18:57.614 00:18:57.614 Latency(us) 00:18:57.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.614 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:57.614 Verification LBA range: start 0x0 length 0x2000 00:18:57.614 nvme0n1 : 1.06 1697.63 6.63 0.00 0.00 73638.81 5971.06 101750.71 00:18:57.614 =================================================================================================================== 00:18:57.614 Total : 1697.63 6.63 0.00 0.00 73638.81 5971.06 101750.71 00:18:57.614 0 00:18:57.614 00:55:31 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2670161 00:18:57.614 00:55:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2670161 ']' 00:18:57.614 00:55:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2670161 00:18:57.614 00:55:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:57.614 00:55:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:57.614 00:55:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2670161 00:18:57.614 00:55:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:57.614 00:55:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:57.614 00:55:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2670161' 00:18:57.614 killing process with pid 2670161 00:18:57.614 00:55:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2670161 00:18:57.614 Received shutdown signal, test time was about 1.000000 seconds 00:18:57.614 00:18:57.614 Latency(us) 00:18:57.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.614 =================================================================================================================== 00:18:57.614 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:57.614 00:55:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2670161 00:18:57.614 00:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2669839 00:18:57.614 00:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2669839 ']' 00:18:57.614 00:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2669839 00:18:57.614 00:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:57.614 00:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:57.614 00:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2669839 00:18:57.614 00:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:57.614 00:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:57.614 00:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2669839' 00:18:57.614 killing process with pid 2669839 00:18:57.614 00:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2669839 00:18:57.614 [2024-07-16 00:55:32.288986] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:57.614 00:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2669839 00:18:57.874 00:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:18:57.874 00:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:57.874 
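Teardown in these tests goes through the killprocess helper, whose traced commands appear several times above. Reconstructed only from those traces (the helper's full branching, e.g. the sudo case, is not visible in this excerpt), the pattern is roughly:

  # confirm the pid still exists, check it is an SPDK reactor rather than a sudo
  # wrapper, then terminate it and reap it so the next phase starts clean
  kill -0 "$pid"
  ps --no-headers -o comm= "$pid"    # expected to print reactor_<core>
  kill "$pid"
  wait "$pid"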
00:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:57.874 00:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.874 00:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2670537 00:18:57.874 00:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:57.874 00:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2670537 00:18:57.874 00:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2670537 ']' 00:18:57.874 00:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.874 00:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:57.874 00:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.874 00:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:57.874 00:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.132 [2024-07-16 00:55:32.638934] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:18:58.132 [2024-07-16 00:55:32.639040] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.132 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.132 [2024-07-16 00:55:32.707616] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.132 [2024-07-16 00:55:32.820797] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.133 [2024-07-16 00:55:32.820865] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.133 [2024-07-16 00:55:32.820898] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.133 [2024-07-16 00:55:32.820911] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.133 [2024-07-16 00:55:32.820922] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
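The new target was launched with -e 0xFFFF, so every tracepoint group is enabled, and the NOTICE lines above describe the two ways to inspect them. Spelled out as commands (the spdk_trace binary location assumes the in-tree build used by this job, and the copy destination is illustrative):

# Live snapshot of the nvmf tracepoints for shared-memory instance 0, as the notice suggests
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0

# Or keep the raw trace file for offline analysis/debug
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0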
00:18:58.133 [2024-07-16 00:55:32.820954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.064 00:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:59.064 00:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:59.064 00:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:59.064 00:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:59.064 00:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.064 00:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.064 00:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:18:59.064 00:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.064 00:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.064 [2024-07-16 00:55:33.596235] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.064 malloc0 00:18:59.064 [2024-07-16 00:55:33.627978] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:59.064 [2024-07-16 00:55:33.628253] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.064 00:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.064 00:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2670685 00:18:59.064 00:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:59.064 00:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2670685 /var/tmp/bdevperf.sock 00:18:59.064 00:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2670685 ']' 00:18:59.064 00:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.064 00:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:59.064 00:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.065 00:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:59.065 00:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.065 [2024-07-16 00:55:33.698585] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
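The NOTICE lines above (TCP transport init, malloc0, the experimental-TLS listener on 10.0.0.2:4420) are produced by the rpc_cmd batch at target/tls.sh:241, and the resulting configuration is dumped as tgtcfg a little further down. A hedged reconstruction of that provisioning against the target's /var/tmp/spdk.sock follows; the flag spellings are taken from current rpc.py, and the option that selects the ssl socket implementation for the listener varies between SPDK releases, so treat this as a sketch rather than the test's literal code:

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp                      # "*** TCP Transport Init ***"
$RPC bdev_malloc_create -b malloc0 32 4096             # 8192 blocks x 4096 B = 32 MiB, matching the saved config
$RPC keyring_file_add_key key0 /tmp/tmp.IScbN68pi4     # same PSK file the initiator registers
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -m 32
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
# Listener on 10.0.0.2:4420; the saved config records "sock_impl": "ssl" for it, which is what
# makes this a TLS listener, but the rpc.py spelling of that option is version dependent.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420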
00:18:59.065 [2024-07-16 00:55:33.698644] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670685 ] 00:18:59.065 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.065 [2024-07-16 00:55:33.760053] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.322 [2024-07-16 00:55:33.875772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.322 00:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:59.322 00:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:59.322 00:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IScbN68pi4 00:18:59.578 00:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:59.836 [2024-07-16 00:55:34.462484] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.836 nvme0n1 00:18:59.836 00:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:00.093 Running I/O for 1 seconds... 00:19:01.024 00:19:01.024 Latency(us) 00:19:01.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.024 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:01.024 Verification LBA range: start 0x0 length 0x2000 00:19:01.024 nvme0n1 : 1.07 1644.34 6.42 0.00 0.00 75984.69 6505.05 113401.55 00:19:01.024 =================================================================================================================== 00:19:01.024 Total : 1644.34 6.42 0.00 0.00 75984.69 6505.05 113401.55 00:19:01.024 0 00:19:01.024 00:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:19:01.024 00:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.024 00:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.282 00:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.282 00:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:19:01.282 "subsystems": [ 00:19:01.282 { 00:19:01.282 "subsystem": "keyring", 00:19:01.282 "config": [ 00:19:01.282 { 00:19:01.282 "method": "keyring_file_add_key", 00:19:01.282 "params": { 00:19:01.282 "name": "key0", 00:19:01.282 "path": "/tmp/tmp.IScbN68pi4" 00:19:01.282 } 00:19:01.282 } 00:19:01.282 ] 00:19:01.282 }, 00:19:01.282 { 00:19:01.282 "subsystem": "iobuf", 00:19:01.282 "config": [ 00:19:01.282 { 00:19:01.282 "method": "iobuf_set_options", 00:19:01.282 "params": { 00:19:01.282 "small_pool_count": 8192, 00:19:01.282 "large_pool_count": 1024, 00:19:01.282 "small_bufsize": 8192, 00:19:01.282 "large_bufsize": 135168 00:19:01.282 } 00:19:01.282 } 00:19:01.282 ] 00:19:01.282 }, 00:19:01.282 { 00:19:01.282 "subsystem": "sock", 00:19:01.282 "config": [ 00:19:01.282 { 00:19:01.282 "method": "sock_set_default_impl", 00:19:01.282 "params": { 00:19:01.282 "impl_name": "posix" 00:19:01.282 } 
00:19:01.282 }, 00:19:01.282 { 00:19:01.282 "method": "sock_impl_set_options", 00:19:01.282 "params": { 00:19:01.282 "impl_name": "ssl", 00:19:01.282 "recv_buf_size": 4096, 00:19:01.282 "send_buf_size": 4096, 00:19:01.282 "enable_recv_pipe": true, 00:19:01.282 "enable_quickack": false, 00:19:01.282 "enable_placement_id": 0, 00:19:01.282 "enable_zerocopy_send_server": true, 00:19:01.282 "enable_zerocopy_send_client": false, 00:19:01.282 "zerocopy_threshold": 0, 00:19:01.282 "tls_version": 0, 00:19:01.282 "enable_ktls": false 00:19:01.282 } 00:19:01.282 }, 00:19:01.282 { 00:19:01.282 "method": "sock_impl_set_options", 00:19:01.282 "params": { 00:19:01.282 "impl_name": "posix", 00:19:01.282 "recv_buf_size": 2097152, 00:19:01.282 "send_buf_size": 2097152, 00:19:01.282 "enable_recv_pipe": true, 00:19:01.282 "enable_quickack": false, 00:19:01.282 "enable_placement_id": 0, 00:19:01.282 "enable_zerocopy_send_server": true, 00:19:01.282 "enable_zerocopy_send_client": false, 00:19:01.282 "zerocopy_threshold": 0, 00:19:01.282 "tls_version": 0, 00:19:01.282 "enable_ktls": false 00:19:01.282 } 00:19:01.282 } 00:19:01.282 ] 00:19:01.282 }, 00:19:01.282 { 00:19:01.282 "subsystem": "vmd", 00:19:01.282 "config": [] 00:19:01.282 }, 00:19:01.282 { 00:19:01.282 "subsystem": "accel", 00:19:01.282 "config": [ 00:19:01.282 { 00:19:01.282 "method": "accel_set_options", 00:19:01.282 "params": { 00:19:01.282 "small_cache_size": 128, 00:19:01.282 "large_cache_size": 16, 00:19:01.282 "task_count": 2048, 00:19:01.282 "sequence_count": 2048, 00:19:01.282 "buf_count": 2048 00:19:01.282 } 00:19:01.282 } 00:19:01.282 ] 00:19:01.282 }, 00:19:01.282 { 00:19:01.282 "subsystem": "bdev", 00:19:01.282 "config": [ 00:19:01.282 { 00:19:01.282 "method": "bdev_set_options", 00:19:01.282 "params": { 00:19:01.282 "bdev_io_pool_size": 65535, 00:19:01.282 "bdev_io_cache_size": 256, 00:19:01.282 "bdev_auto_examine": true, 00:19:01.282 "iobuf_small_cache_size": 128, 00:19:01.282 "iobuf_large_cache_size": 16 00:19:01.282 } 00:19:01.282 }, 00:19:01.282 { 00:19:01.282 "method": "bdev_raid_set_options", 00:19:01.282 "params": { 00:19:01.282 "process_window_size_kb": 1024 00:19:01.282 } 00:19:01.282 }, 00:19:01.282 { 00:19:01.282 "method": "bdev_iscsi_set_options", 00:19:01.282 "params": { 00:19:01.282 "timeout_sec": 30 00:19:01.282 } 00:19:01.282 }, 00:19:01.282 { 00:19:01.282 "method": "bdev_nvme_set_options", 00:19:01.282 "params": { 00:19:01.282 "action_on_timeout": "none", 00:19:01.282 "timeout_us": 0, 00:19:01.282 "timeout_admin_us": 0, 00:19:01.282 "keep_alive_timeout_ms": 10000, 00:19:01.282 "arbitration_burst": 0, 00:19:01.282 "low_priority_weight": 0, 00:19:01.282 "medium_priority_weight": 0, 00:19:01.282 "high_priority_weight": 0, 00:19:01.282 "nvme_adminq_poll_period_us": 10000, 00:19:01.282 "nvme_ioq_poll_period_us": 0, 00:19:01.282 "io_queue_requests": 0, 00:19:01.282 "delay_cmd_submit": true, 00:19:01.282 "transport_retry_count": 4, 00:19:01.282 "bdev_retry_count": 3, 00:19:01.282 "transport_ack_timeout": 0, 00:19:01.282 "ctrlr_loss_timeout_sec": 0, 00:19:01.282 "reconnect_delay_sec": 0, 00:19:01.282 "fast_io_fail_timeout_sec": 0, 00:19:01.282 "disable_auto_failback": false, 00:19:01.282 "generate_uuids": false, 00:19:01.282 "transport_tos": 0, 00:19:01.282 "nvme_error_stat": false, 00:19:01.282 "rdma_srq_size": 0, 00:19:01.282 "io_path_stat": false, 00:19:01.282 "allow_accel_sequence": false, 00:19:01.282 "rdma_max_cq_size": 0, 00:19:01.282 "rdma_cm_event_timeout_ms": 0, 00:19:01.282 "dhchap_digests": [ 00:19:01.282 "sha256", 
00:19:01.282 "sha384", 00:19:01.282 "sha512" 00:19:01.282 ], 00:19:01.282 "dhchap_dhgroups": [ 00:19:01.282 "null", 00:19:01.282 "ffdhe2048", 00:19:01.282 "ffdhe3072", 00:19:01.282 "ffdhe4096", 00:19:01.282 "ffdhe6144", 00:19:01.282 "ffdhe8192" 00:19:01.282 ] 00:19:01.282 } 00:19:01.282 }, 00:19:01.282 { 00:19:01.282 "method": "bdev_nvme_set_hotplug", 00:19:01.282 "params": { 00:19:01.282 "period_us": 100000, 00:19:01.282 "enable": false 00:19:01.282 } 00:19:01.282 }, 00:19:01.282 { 00:19:01.282 "method": "bdev_malloc_create", 00:19:01.282 "params": { 00:19:01.282 "name": "malloc0", 00:19:01.282 "num_blocks": 8192, 00:19:01.282 "block_size": 4096, 00:19:01.282 "physical_block_size": 4096, 00:19:01.282 "uuid": "bca5b199-c232-4249-b7ee-b2a19088b37c", 00:19:01.282 "optimal_io_boundary": 0 00:19:01.282 } 00:19:01.282 }, 00:19:01.282 { 00:19:01.282 "method": "bdev_wait_for_examine" 00:19:01.282 } 00:19:01.282 ] 00:19:01.282 }, 00:19:01.282 { 00:19:01.282 "subsystem": "nbd", 00:19:01.282 "config": [] 00:19:01.282 }, 00:19:01.282 { 00:19:01.282 "subsystem": "scheduler", 00:19:01.282 "config": [ 00:19:01.282 { 00:19:01.282 "method": "framework_set_scheduler", 00:19:01.282 "params": { 00:19:01.282 "name": "static" 00:19:01.282 } 00:19:01.282 } 00:19:01.282 ] 00:19:01.282 }, 00:19:01.282 { 00:19:01.282 "subsystem": "nvmf", 00:19:01.282 "config": [ 00:19:01.282 { 00:19:01.282 "method": "nvmf_set_config", 00:19:01.282 "params": { 00:19:01.282 "discovery_filter": "match_any", 00:19:01.282 "admin_cmd_passthru": { 00:19:01.282 "identify_ctrlr": false 00:19:01.282 } 00:19:01.282 } 00:19:01.282 }, 00:19:01.282 { 00:19:01.282 "method": "nvmf_set_max_subsystems", 00:19:01.282 "params": { 00:19:01.282 "max_subsystems": 1024 00:19:01.282 } 00:19:01.282 }, 00:19:01.283 { 00:19:01.283 "method": "nvmf_set_crdt", 00:19:01.283 "params": { 00:19:01.283 "crdt1": 0, 00:19:01.283 "crdt2": 0, 00:19:01.283 "crdt3": 0 00:19:01.283 } 00:19:01.283 }, 00:19:01.283 { 00:19:01.283 "method": "nvmf_create_transport", 00:19:01.283 "params": { 00:19:01.283 "trtype": "TCP", 00:19:01.283 "max_queue_depth": 128, 00:19:01.283 "max_io_qpairs_per_ctrlr": 127, 00:19:01.283 "in_capsule_data_size": 4096, 00:19:01.283 "max_io_size": 131072, 00:19:01.283 "io_unit_size": 131072, 00:19:01.283 "max_aq_depth": 128, 00:19:01.283 "num_shared_buffers": 511, 00:19:01.283 "buf_cache_size": 4294967295, 00:19:01.283 "dif_insert_or_strip": false, 00:19:01.283 "zcopy": false, 00:19:01.283 "c2h_success": false, 00:19:01.283 "sock_priority": 0, 00:19:01.283 "abort_timeout_sec": 1, 00:19:01.283 "ack_timeout": 0, 00:19:01.283 "data_wr_pool_size": 0 00:19:01.283 } 00:19:01.283 }, 00:19:01.283 { 00:19:01.283 "method": "nvmf_create_subsystem", 00:19:01.283 "params": { 00:19:01.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.283 "allow_any_host": false, 00:19:01.283 "serial_number": "00000000000000000000", 00:19:01.283 "model_number": "SPDK bdev Controller", 00:19:01.283 "max_namespaces": 32, 00:19:01.283 "min_cntlid": 1, 00:19:01.283 "max_cntlid": 65519, 00:19:01.283 "ana_reporting": false 00:19:01.283 } 00:19:01.283 }, 00:19:01.283 { 00:19:01.283 "method": "nvmf_subsystem_add_host", 00:19:01.283 "params": { 00:19:01.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.283 "host": "nqn.2016-06.io.spdk:host1", 00:19:01.283 "psk": "key0" 00:19:01.283 } 00:19:01.283 }, 00:19:01.283 { 00:19:01.283 "method": "nvmf_subsystem_add_ns", 00:19:01.283 "params": { 00:19:01.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.283 "namespace": { 00:19:01.283 "nsid": 1, 
00:19:01.283 "bdev_name": "malloc0", 00:19:01.283 "nguid": "BCA5B199C2324249B7EEB2A19088B37C", 00:19:01.283 "uuid": "bca5b199-c232-4249-b7ee-b2a19088b37c", 00:19:01.283 "no_auto_visible": false 00:19:01.283 } 00:19:01.283 } 00:19:01.283 }, 00:19:01.283 { 00:19:01.283 "method": "nvmf_subsystem_add_listener", 00:19:01.283 "params": { 00:19:01.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.283 "listen_address": { 00:19:01.283 "trtype": "TCP", 00:19:01.283 "adrfam": "IPv4", 00:19:01.283 "traddr": "10.0.0.2", 00:19:01.283 "trsvcid": "4420" 00:19:01.283 }, 00:19:01.283 "secure_channel": false, 00:19:01.283 "sock_impl": "ssl" 00:19:01.283 } 00:19:01.283 } 00:19:01.283 ] 00:19:01.283 } 00:19:01.283 ] 00:19:01.283 }' 00:19:01.283 00:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:01.541 00:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:19:01.541 "subsystems": [ 00:19:01.541 { 00:19:01.541 "subsystem": "keyring", 00:19:01.541 "config": [ 00:19:01.541 { 00:19:01.541 "method": "keyring_file_add_key", 00:19:01.541 "params": { 00:19:01.541 "name": "key0", 00:19:01.541 "path": "/tmp/tmp.IScbN68pi4" 00:19:01.541 } 00:19:01.541 } 00:19:01.541 ] 00:19:01.541 }, 00:19:01.541 { 00:19:01.541 "subsystem": "iobuf", 00:19:01.541 "config": [ 00:19:01.541 { 00:19:01.541 "method": "iobuf_set_options", 00:19:01.541 "params": { 00:19:01.541 "small_pool_count": 8192, 00:19:01.541 "large_pool_count": 1024, 00:19:01.541 "small_bufsize": 8192, 00:19:01.541 "large_bufsize": 135168 00:19:01.541 } 00:19:01.541 } 00:19:01.541 ] 00:19:01.541 }, 00:19:01.541 { 00:19:01.541 "subsystem": "sock", 00:19:01.541 "config": [ 00:19:01.541 { 00:19:01.541 "method": "sock_set_default_impl", 00:19:01.541 "params": { 00:19:01.541 "impl_name": "posix" 00:19:01.541 } 00:19:01.541 }, 00:19:01.541 { 00:19:01.541 "method": "sock_impl_set_options", 00:19:01.541 "params": { 00:19:01.541 "impl_name": "ssl", 00:19:01.541 "recv_buf_size": 4096, 00:19:01.541 "send_buf_size": 4096, 00:19:01.541 "enable_recv_pipe": true, 00:19:01.541 "enable_quickack": false, 00:19:01.541 "enable_placement_id": 0, 00:19:01.541 "enable_zerocopy_send_server": true, 00:19:01.541 "enable_zerocopy_send_client": false, 00:19:01.541 "zerocopy_threshold": 0, 00:19:01.541 "tls_version": 0, 00:19:01.541 "enable_ktls": false 00:19:01.541 } 00:19:01.541 }, 00:19:01.541 { 00:19:01.541 "method": "sock_impl_set_options", 00:19:01.541 "params": { 00:19:01.541 "impl_name": "posix", 00:19:01.541 "recv_buf_size": 2097152, 00:19:01.541 "send_buf_size": 2097152, 00:19:01.541 "enable_recv_pipe": true, 00:19:01.541 "enable_quickack": false, 00:19:01.541 "enable_placement_id": 0, 00:19:01.541 "enable_zerocopy_send_server": true, 00:19:01.541 "enable_zerocopy_send_client": false, 00:19:01.541 "zerocopy_threshold": 0, 00:19:01.541 "tls_version": 0, 00:19:01.541 "enable_ktls": false 00:19:01.541 } 00:19:01.541 } 00:19:01.541 ] 00:19:01.541 }, 00:19:01.541 { 00:19:01.541 "subsystem": "vmd", 00:19:01.541 "config": [] 00:19:01.541 }, 00:19:01.541 { 00:19:01.541 "subsystem": "accel", 00:19:01.541 "config": [ 00:19:01.541 { 00:19:01.541 "method": "accel_set_options", 00:19:01.541 "params": { 00:19:01.541 "small_cache_size": 128, 00:19:01.541 "large_cache_size": 16, 00:19:01.541 "task_count": 2048, 00:19:01.541 "sequence_count": 2048, 00:19:01.541 "buf_count": 2048 00:19:01.541 } 00:19:01.541 } 00:19:01.541 ] 00:19:01.541 }, 00:19:01.541 { 00:19:01.541 "subsystem": "bdev", 
00:19:01.541 "config": [ 00:19:01.541 { 00:19:01.541 "method": "bdev_set_options", 00:19:01.541 "params": { 00:19:01.541 "bdev_io_pool_size": 65535, 00:19:01.541 "bdev_io_cache_size": 256, 00:19:01.541 "bdev_auto_examine": true, 00:19:01.541 "iobuf_small_cache_size": 128, 00:19:01.541 "iobuf_large_cache_size": 16 00:19:01.541 } 00:19:01.541 }, 00:19:01.541 { 00:19:01.541 "method": "bdev_raid_set_options", 00:19:01.541 "params": { 00:19:01.541 "process_window_size_kb": 1024 00:19:01.541 } 00:19:01.541 }, 00:19:01.541 { 00:19:01.541 "method": "bdev_iscsi_set_options", 00:19:01.541 "params": { 00:19:01.541 "timeout_sec": 30 00:19:01.541 } 00:19:01.541 }, 00:19:01.541 { 00:19:01.541 "method": "bdev_nvme_set_options", 00:19:01.541 "params": { 00:19:01.541 "action_on_timeout": "none", 00:19:01.541 "timeout_us": 0, 00:19:01.541 "timeout_admin_us": 0, 00:19:01.541 "keep_alive_timeout_ms": 10000, 00:19:01.541 "arbitration_burst": 0, 00:19:01.541 "low_priority_weight": 0, 00:19:01.541 "medium_priority_weight": 0, 00:19:01.541 "high_priority_weight": 0, 00:19:01.541 "nvme_adminq_poll_period_us": 10000, 00:19:01.541 "nvme_ioq_poll_period_us": 0, 00:19:01.541 "io_queue_requests": 512, 00:19:01.541 "delay_cmd_submit": true, 00:19:01.541 "transport_retry_count": 4, 00:19:01.541 "bdev_retry_count": 3, 00:19:01.541 "transport_ack_timeout": 0, 00:19:01.541 "ctrlr_loss_timeout_sec": 0, 00:19:01.541 "reconnect_delay_sec": 0, 00:19:01.541 "fast_io_fail_timeout_sec": 0, 00:19:01.541 "disable_auto_failback": false, 00:19:01.541 "generate_uuids": false, 00:19:01.541 "transport_tos": 0, 00:19:01.541 "nvme_error_stat": false, 00:19:01.541 "rdma_srq_size": 0, 00:19:01.541 "io_path_stat": false, 00:19:01.541 "allow_accel_sequence": false, 00:19:01.541 "rdma_max_cq_size": 0, 00:19:01.541 "rdma_cm_event_timeout_ms": 0, 00:19:01.541 "dhchap_digests": [ 00:19:01.541 "sha256", 00:19:01.541 "sha384", 00:19:01.541 "sha512" 00:19:01.541 ], 00:19:01.541 "dhchap_dhgroups": [ 00:19:01.541 "null", 00:19:01.541 "ffdhe2048", 00:19:01.541 "ffdhe3072", 00:19:01.541 "ffdhe4096", 00:19:01.541 "ffdhe6144", 00:19:01.541 "ffdhe8192" 00:19:01.541 ] 00:19:01.541 } 00:19:01.541 }, 00:19:01.541 { 00:19:01.541 "method": "bdev_nvme_attach_controller", 00:19:01.541 "params": { 00:19:01.541 "name": "nvme0", 00:19:01.541 "trtype": "TCP", 00:19:01.541 "adrfam": "IPv4", 00:19:01.541 "traddr": "10.0.0.2", 00:19:01.541 "trsvcid": "4420", 00:19:01.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.541 "prchk_reftag": false, 00:19:01.541 "prchk_guard": false, 00:19:01.541 "ctrlr_loss_timeout_sec": 0, 00:19:01.541 "reconnect_delay_sec": 0, 00:19:01.541 "fast_io_fail_timeout_sec": 0, 00:19:01.541 "psk": "key0", 00:19:01.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:01.541 "hdgst": false, 00:19:01.541 "ddgst": false 00:19:01.541 } 00:19:01.541 }, 00:19:01.541 { 00:19:01.541 "method": "bdev_nvme_set_hotplug", 00:19:01.541 "params": { 00:19:01.541 "period_us": 100000, 00:19:01.541 "enable": false 00:19:01.541 } 00:19:01.541 }, 00:19:01.541 { 00:19:01.541 "method": "bdev_enable_histogram", 00:19:01.541 "params": { 00:19:01.541 "name": "nvme0n1", 00:19:01.541 "enable": true 00:19:01.541 } 00:19:01.541 }, 00:19:01.541 { 00:19:01.541 "method": "bdev_wait_for_examine" 00:19:01.541 } 00:19:01.541 ] 00:19:01.541 }, 00:19:01.541 { 00:19:01.541 "subsystem": "nbd", 00:19:01.541 "config": [] 00:19:01.541 } 00:19:01.541 ] 00:19:01.541 }' 00:19:01.541 00:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 2670685 00:19:01.541 00:55:36 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 2670685 ']' 00:19:01.541 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2670685 00:19:01.541 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:01.541 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:01.541 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2670685 00:19:01.541 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:01.541 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:01.541 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2670685' 00:19:01.541 killing process with pid 2670685 00:19:01.541 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2670685 00:19:01.541 Received shutdown signal, test time was about 1.000000 seconds 00:19:01.541 00:19:01.541 Latency(us) 00:19:01.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.541 =================================================================================================================== 00:19:01.541 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:01.541 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2670685 00:19:01.799 00:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 2670537 00:19:01.799 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2670537 ']' 00:19:01.799 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2670537 00:19:01.799 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:01.799 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:01.799 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2670537 00:19:01.799 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:01.799 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:01.799 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2670537' 00:19:01.799 killing process with pid 2670537 00:19:01.799 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2670537 00:19:01.799 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2670537 00:19:02.057 00:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:19:02.057 00:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:02.057 00:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:19:02.057 "subsystems": [ 00:19:02.057 { 00:19:02.057 "subsystem": "keyring", 00:19:02.057 "config": [ 00:19:02.057 { 00:19:02.057 "method": "keyring_file_add_key", 00:19:02.057 "params": { 00:19:02.057 "name": "key0", 00:19:02.057 "path": "/tmp/tmp.IScbN68pi4" 00:19:02.057 } 00:19:02.057 } 00:19:02.057 ] 00:19:02.057 }, 00:19:02.057 { 00:19:02.057 "subsystem": "iobuf", 00:19:02.057 "config": [ 00:19:02.057 { 00:19:02.057 "method": "iobuf_set_options", 00:19:02.057 "params": { 00:19:02.057 "small_pool_count": 8192, 00:19:02.057 "large_pool_count": 1024, 00:19:02.057 "small_bufsize": 8192, 00:19:02.057 "large_bufsize": 135168 00:19:02.057 } 00:19:02.057 } 00:19:02.057 ] 00:19:02.057 }, 00:19:02.057 { 00:19:02.057 "subsystem": "sock", 00:19:02.057 "config": [ 00:19:02.057 { 
00:19:02.057 "method": "sock_set_default_impl", 00:19:02.057 "params": { 00:19:02.057 "impl_name": "posix" 00:19:02.057 } 00:19:02.057 }, 00:19:02.057 { 00:19:02.057 "method": "sock_impl_set_options", 00:19:02.057 "params": { 00:19:02.057 "impl_name": "ssl", 00:19:02.057 "recv_buf_size": 4096, 00:19:02.057 "send_buf_size": 4096, 00:19:02.057 "enable_recv_pipe": true, 00:19:02.057 "enable_quickack": false, 00:19:02.058 "enable_placement_id": 0, 00:19:02.058 "enable_zerocopy_send_server": true, 00:19:02.058 "enable_zerocopy_send_client": false, 00:19:02.058 "zerocopy_threshold": 0, 00:19:02.058 "tls_version": 0, 00:19:02.058 "enable_ktls": false 00:19:02.058 } 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "method": "sock_impl_set_options", 00:19:02.058 "params": { 00:19:02.058 "impl_name": "posix", 00:19:02.058 "recv_buf_size": 2097152, 00:19:02.058 "send_buf_size": 2097152, 00:19:02.058 "enable_recv_pipe": true, 00:19:02.058 "enable_quickack": false, 00:19:02.058 "enable_placement_id": 0, 00:19:02.058 "enable_zerocopy_send_server": true, 00:19:02.058 "enable_zerocopy_send_client": false, 00:19:02.058 "zerocopy_threshold": 0, 00:19:02.058 "tls_version": 0, 00:19:02.058 "enable_ktls": false 00:19:02.058 } 00:19:02.058 } 00:19:02.058 ] 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "subsystem": "vmd", 00:19:02.058 "config": [] 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "subsystem": "accel", 00:19:02.058 "config": [ 00:19:02.058 { 00:19:02.058 "method": "accel_set_options", 00:19:02.058 "params": { 00:19:02.058 "small_cache_size": 128, 00:19:02.058 "large_cache_size": 16, 00:19:02.058 "task_count": 2048, 00:19:02.058 "sequence_count": 2048, 00:19:02.058 "buf_count": 2048 00:19:02.058 } 00:19:02.058 } 00:19:02.058 ] 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "subsystem": "bdev", 00:19:02.058 "config": [ 00:19:02.058 { 00:19:02.058 "method": "bdev_set_options", 00:19:02.058 "params": { 00:19:02.058 "bdev_io_pool_size": 65535, 00:19:02.058 "bdev_io_cache_size": 256, 00:19:02.058 "bdev_auto_examine": true, 00:19:02.058 "iobuf_small_cache_size": 128, 00:19:02.058 "iobuf_large_cache_size": 16 00:19:02.058 } 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "method": "bdev_raid_set_options", 00:19:02.058 "params": { 00:19:02.058 "process_window_size_kb": 1024 00:19:02.058 } 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "method": "bdev_iscsi_set_options", 00:19:02.058 "params": { 00:19:02.058 "timeout_sec": 30 00:19:02.058 } 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "method": "bdev_nvme_set_options", 00:19:02.058 "params": { 00:19:02.058 "action_on_timeout": "none", 00:19:02.058 "timeout_us": 0, 00:19:02.058 "timeout_admin_us": 0, 00:19:02.058 "keep_alive_timeout_ms": 10000, 00:19:02.058 "arbitration_burst": 0, 00:19:02.058 "low_priority_weight": 0, 00:19:02.058 "medium_priority_weight": 0, 00:19:02.058 "high_priority_weight": 0, 00:19:02.058 "nvme_adminq_poll_period_us": 10000, 00:19:02.058 "nvme_ioq_poll_period_us": 0, 00:19:02.058 "io_queue_requests": 0, 00:19:02.058 "delay_cmd_submit": true, 00:19:02.058 "transport_retry_count": 4, 00:19:02.058 "bdev_retry_count": 3, 00:19:02.058 "transport_ack_timeout": 0, 00:19:02.058 "ctrlr_loss_timeout_sec": 0, 00:19:02.058 "reconnect_delay_sec": 0, 00:19:02.058 "fast_io_fail_timeout_sec": 0, 00:19:02.058 "disable_auto_failback": false, 00:19:02.058 "generate_uuids": false, 00:19:02.058 "transport_tos": 0, 00:19:02.058 "nvme_error_stat": false, 00:19:02.058 "rdma_srq_size": 0, 00:19:02.058 "io_path_stat": false, 00:19:02.058 "allow_accel_sequence": false, 00:19:02.058 
"rdma_max_cq_size": 0, 00:19:02.058 "rdma_cm_event_timeout_ms": 0, 00:19:02.058 "dhchap_digests": [ 00:19:02.058 "sha256", 00:19:02.058 "sha384", 00:19:02.058 "sha512" 00:19:02.058 ], 00:19:02.058 "dhchap_dhgroups": [ 00:19:02.058 "null", 00:19:02.058 "ffdhe2048", 00:19:02.058 "ffdhe3072", 00:19:02.058 "ffdhe4096", 00:19:02.058 "ffdhe6144", 00:19:02.058 "ffdhe8192" 00:19:02.058 ] 00:19:02.058 } 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "method": "bdev_nvme_set_hotplug", 00:19:02.058 "params": { 00:19:02.058 "period_us": 100000, 00:19:02.058 "enable": false 00:19:02.058 } 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "method": "bdev_malloc_create", 00:19:02.058 "params": { 00:19:02.058 "name": "malloc0", 00:19:02.058 "num_blocks": 8192, 00:19:02.058 "block_size": 4096, 00:19:02.058 "physical_block_size": 4096, 00:19:02.058 "uuid": "bca5b199-c232-4249-b7ee-b2a19088b37c", 00:19:02.058 "optimal_io_boundary": 0 00:19:02.058 } 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "method": "bdev_wait_for_examine" 00:19:02.058 } 00:19:02.058 ] 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "subsystem": "nbd", 00:19:02.058 "config": [] 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "subsystem": "scheduler", 00:19:02.058 "config": [ 00:19:02.058 { 00:19:02.058 "method": "framework_set_scheduler", 00:19:02.058 "params": { 00:19:02.058 "name": "static" 00:19:02.058 } 00:19:02.058 } 00:19:02.058 ] 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "subsystem": "nvmf", 00:19:02.058 "config": [ 00:19:02.058 { 00:19:02.058 "method": "nvmf_set_config", 00:19:02.058 "params": { 00:19:02.058 "discovery_filter": "match_any", 00:19:02.058 "admin_cmd_passthru": { 00:19:02.058 "identify_ctrlr": false 00:19:02.058 } 00:19:02.058 } 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "method": "nvmf_set_max_subsystems", 00:19:02.058 "params": { 00:19:02.058 "max_subsystems": 1024 00:19:02.058 } 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "method": "nvmf_set_crdt", 00:19:02.058 "params": { 00:19:02.058 "crdt1": 0, 00:19:02.058 "crdt2": 0, 00:19:02.058 "crdt3": 0 00:19:02.058 } 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "method": "nvmf_create_transport", 00:19:02.058 "params": { 00:19:02.058 "trtype": "TCP", 00:19:02.058 "max_queue_depth": 128, 00:19:02.058 "max_io_qpairs_per_ctrlr": 127, 00:19:02.058 "in_capsule_data_size": 4096, 00:19:02.058 "max_io_size": 131072, 00:19:02.058 "io_unit_size": 131072, 00:19:02.058 "max_aq_depth": 128, 00:19:02.058 "num_shared_buffers": 511, 00:19:02.058 "buf_cache_size": 4294967295, 00:19:02.058 "dif_insert_or_strip": false, 00:19:02.058 "zcopy": false, 00:19:02.058 "c2h_success": false, 00:19:02.058 "sock_priority": 0, 00:19:02.058 "abort_timeout_sec": 1, 00:19:02.058 "ack_timeout": 0, 00:19:02.058 "data_wr_pool_size": 0 00:19:02.058 } 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "method": "nvmf_create_subsystem", 00:19:02.058 "params": { 00:19:02.058 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.058 "allow_any_host": false, 00:19:02.058 "serial_number": "00000000000000000000", 00:19:02.058 "model_number": "SPDK bdev Controller", 00:19:02.058 "max_namespaces": 32, 00:19:02.058 "min_cntlid": 1, 00:19:02.058 "max_cntlid": 65519, 00:19:02.058 "ana_reporting": false 00:19:02.058 } 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "method": "nvmf_subsystem_add_host", 00:19:02.058 "params": { 00:19:02.058 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.058 "host": "nqn.2016-06.io.spdk:host1", 00:19:02.058 "psk": "key0" 00:19:02.058 } 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "method": "nvmf_subsystem_add_ns", 00:19:02.058 
"params": { 00:19:02.058 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.058 "namespace": { 00:19:02.058 "nsid": 1, 00:19:02.058 "bdev_name": "malloc0", 00:19:02.058 "nguid": "BCA5B199C2324249B7EEB2A19088B37C", 00:19:02.058 "uuid": "bca5b199-c232-4249-b7ee-b2a19088b37c", 00:19:02.058 "no_auto_visible": false 00:19:02.058 } 00:19:02.058 } 00:19:02.058 }, 00:19:02.058 { 00:19:02.058 "method": "nvmf_subsystem_add_listener", 00:19:02.058 "params": { 00:19:02.058 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.058 "listen_address": { 00:19:02.058 "trtype": "TCP", 00:19:02.058 "adrfam": "IPv4", 00:19:02.058 "traddr": "10.0.0.2", 00:19:02.058 "trsvcid": "4420" 00:19:02.058 }, 00:19:02.058 "secure_channel": false, 00:19:02.058 "sock_impl": "ssl" 00:19:02.058 } 00:19:02.058 } 00:19:02.058 ] 00:19:02.058 } 00:19:02.058 ] 00:19:02.058 }' 00:19:02.058 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:02.058 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.058 00:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2671072 00:19:02.058 00:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:02.058 00:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2671072 00:19:02.058 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2671072 ']' 00:19:02.058 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.058 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:02.058 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.058 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:02.058 00:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.317 [2024-07-16 00:55:36.828769] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:19:02.317 [2024-07-16 00:55:36.828871] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.317 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.317 [2024-07-16 00:55:36.897023] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.317 [2024-07-16 00:55:37.010549] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.317 [2024-07-16 00:55:37.010616] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.317 [2024-07-16 00:55:37.010642] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.317 [2024-07-16 00:55:37.010655] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.317 [2024-07-16 00:55:37.010668] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:02.317 [2024-07-16 00:55:37.010748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.576 [2024-07-16 00:55:37.258320] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.576 [2024-07-16 00:55:37.290338] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:02.576 [2024-07-16 00:55:37.311102] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.141 00:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:03.141 00:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:03.141 00:55:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:03.141 00:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:03.141 00:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.141 00:55:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.141 00:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2671132 00:19:03.141 00:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2671132 /var/tmp/bdevperf.sock 00:19:03.141 00:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2671132 ']' 00:19:03.141 00:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:03.141 00:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:03.141 00:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:03.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
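The "Waiting for process to start up and listen on UNIX domain socket ..." lines come from the waitforlisten helper in autotest_common.sh, which retries against the application's RPC socket (max_retries=100 in the xtrace) until it answers. A rough approximation of that loop, using rpc_get_methods as the probe; the real helper does more bookkeeping around the pid and the socket file:

SOCK=/var/tmp/bdevperf.sock
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

for i in $(seq 1 100); do
  if "$RPC" -s "$SOCK" -t 1 rpc_get_methods >/dev/null 2>&1; then
    echo "RPC server is up on $SOCK"
    break
  fi
  sleep 0.5
done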
00:19:03.141 00:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:03.141 00:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:03.141 00:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.141 00:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:19:03.141 "subsystems": [ 00:19:03.141 { 00:19:03.141 "subsystem": "keyring", 00:19:03.141 "config": [ 00:19:03.141 { 00:19:03.141 "method": "keyring_file_add_key", 00:19:03.141 "params": { 00:19:03.141 "name": "key0", 00:19:03.141 "path": "/tmp/tmp.IScbN68pi4" 00:19:03.141 } 00:19:03.141 } 00:19:03.141 ] 00:19:03.141 }, 00:19:03.141 { 00:19:03.141 "subsystem": "iobuf", 00:19:03.141 "config": [ 00:19:03.141 { 00:19:03.141 "method": "iobuf_set_options", 00:19:03.141 "params": { 00:19:03.141 "small_pool_count": 8192, 00:19:03.141 "large_pool_count": 1024, 00:19:03.141 "small_bufsize": 8192, 00:19:03.141 "large_bufsize": 135168 00:19:03.141 } 00:19:03.141 } 00:19:03.141 ] 00:19:03.141 }, 00:19:03.141 { 00:19:03.141 "subsystem": "sock", 00:19:03.141 "config": [ 00:19:03.141 { 00:19:03.141 "method": "sock_set_default_impl", 00:19:03.141 "params": { 00:19:03.141 "impl_name": "posix" 00:19:03.141 } 00:19:03.141 }, 00:19:03.141 { 00:19:03.141 "method": "sock_impl_set_options", 00:19:03.141 "params": { 00:19:03.141 "impl_name": "ssl", 00:19:03.141 "recv_buf_size": 4096, 00:19:03.141 "send_buf_size": 4096, 00:19:03.141 "enable_recv_pipe": true, 00:19:03.141 "enable_quickack": false, 00:19:03.141 "enable_placement_id": 0, 00:19:03.141 "enable_zerocopy_send_server": true, 00:19:03.141 "enable_zerocopy_send_client": false, 00:19:03.141 "zerocopy_threshold": 0, 00:19:03.141 "tls_version": 0, 00:19:03.141 "enable_ktls": false 00:19:03.141 } 00:19:03.141 }, 00:19:03.141 { 00:19:03.141 "method": "sock_impl_set_options", 00:19:03.141 "params": { 00:19:03.141 "impl_name": "posix", 00:19:03.141 "recv_buf_size": 2097152, 00:19:03.141 "send_buf_size": 2097152, 00:19:03.141 "enable_recv_pipe": true, 00:19:03.141 "enable_quickack": false, 00:19:03.141 "enable_placement_id": 0, 00:19:03.141 "enable_zerocopy_send_server": true, 00:19:03.141 "enable_zerocopy_send_client": false, 00:19:03.141 "zerocopy_threshold": 0, 00:19:03.141 "tls_version": 0, 00:19:03.141 "enable_ktls": false 00:19:03.141 } 00:19:03.141 } 00:19:03.141 ] 00:19:03.141 }, 00:19:03.141 { 00:19:03.141 "subsystem": "vmd", 00:19:03.141 "config": [] 00:19:03.141 }, 00:19:03.141 { 00:19:03.141 "subsystem": "accel", 00:19:03.141 "config": [ 00:19:03.141 { 00:19:03.141 "method": "accel_set_options", 00:19:03.141 "params": { 00:19:03.141 "small_cache_size": 128, 00:19:03.141 "large_cache_size": 16, 00:19:03.141 "task_count": 2048, 00:19:03.141 "sequence_count": 2048, 00:19:03.141 "buf_count": 2048 00:19:03.141 } 00:19:03.141 } 00:19:03.141 ] 00:19:03.141 }, 00:19:03.141 { 00:19:03.141 "subsystem": "bdev", 00:19:03.141 "config": [ 00:19:03.141 { 00:19:03.141 "method": "bdev_set_options", 00:19:03.141 "params": { 00:19:03.141 "bdev_io_pool_size": 65535, 00:19:03.141 "bdev_io_cache_size": 256, 00:19:03.141 "bdev_auto_examine": true, 00:19:03.141 "iobuf_small_cache_size": 128, 00:19:03.141 "iobuf_large_cache_size": 16 00:19:03.141 } 00:19:03.141 }, 00:19:03.141 { 00:19:03.141 "method": "bdev_raid_set_options", 00:19:03.141 "params": { 00:19:03.141 "process_window_size_kb": 1024 00:19:03.141 } 
00:19:03.141 }, 00:19:03.141 { 00:19:03.141 "method": "bdev_iscsi_set_options", 00:19:03.141 "params": { 00:19:03.142 "timeout_sec": 30 00:19:03.142 } 00:19:03.142 }, 00:19:03.142 { 00:19:03.142 "method": "bdev_nvme_set_options", 00:19:03.142 "params": { 00:19:03.142 "action_on_timeout": "none", 00:19:03.142 "timeout_us": 0, 00:19:03.142 "timeout_admin_us": 0, 00:19:03.142 "keep_alive_timeout_ms": 10000, 00:19:03.142 "arbitration_burst": 0, 00:19:03.142 "low_priority_weight": 0, 00:19:03.142 "medium_priority_weight": 0, 00:19:03.142 "high_priority_weight": 0, 00:19:03.142 "nvme_adminq_poll_period_us": 10000, 00:19:03.142 "nvme_ioq_poll_period_us": 0, 00:19:03.142 "io_queue_requests": 512, 00:19:03.142 "delay_cmd_submit": true, 00:19:03.142 "transport_retry_count": 4, 00:19:03.142 "bdev_retry_count": 3, 00:19:03.142 "transport_ack_timeout": 0, 00:19:03.142 "ctrlr_loss_timeout_sec": 0, 00:19:03.142 "reconnect_delay_sec": 0, 00:19:03.142 "fast_io_fail_timeout_sec": 0, 00:19:03.142 "disable_auto_failback": false, 00:19:03.142 "generate_uuids": false, 00:19:03.142 "transport_tos": 0, 00:19:03.142 "nvme_error_stat": false, 00:19:03.142 "rdma_srq_size": 0, 00:19:03.142 "io_path_stat": false, 00:19:03.142 "allow_accel_sequence": false, 00:19:03.142 "rdma_max_cq_size": 0, 00:19:03.142 "rdma_cm_event_timeout_ms": 0, 00:19:03.142 "dhchap_digests": [ 00:19:03.142 "sha256", 00:19:03.142 "sha384", 00:19:03.142 "sha512" 00:19:03.142 ], 00:19:03.142 "dhchap_dhgroups": [ 00:19:03.142 "null", 00:19:03.142 "ffdhe2048", 00:19:03.142 "ffdhe3072", 00:19:03.142 "ffdhe4096", 00:19:03.142 "ffdhe6144", 00:19:03.142 "ffdhe8192" 00:19:03.142 ] 00:19:03.142 } 00:19:03.142 }, 00:19:03.142 { 00:19:03.142 "method": "bdev_nvme_attach_controller", 00:19:03.142 "params": { 00:19:03.142 "name": "nvme0", 00:19:03.142 "trtype": "TCP", 00:19:03.142 "adrfam": "IPv4", 00:19:03.142 "traddr": "10.0.0.2", 00:19:03.142 "trsvcid": "4420", 00:19:03.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.142 "prchk_reftag": false, 00:19:03.142 "prchk_guard": false, 00:19:03.142 "ctrlr_loss_timeout_sec": 0, 00:19:03.142 "reconnect_delay_sec": 0, 00:19:03.142 "fast_io_fail_timeout_sec": 0, 00:19:03.142 "psk": "key0", 00:19:03.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:03.142 "hdgst": false, 00:19:03.142 "ddgst": false 00:19:03.142 } 00:19:03.142 }, 00:19:03.142 { 00:19:03.142 "method": "bdev_nvme_set_hotplug", 00:19:03.142 "params": { 00:19:03.142 "period_us": 100000, 00:19:03.142 "enable": false 00:19:03.142 } 00:19:03.142 }, 00:19:03.142 { 00:19:03.142 "method": "bdev_enable_histogram", 00:19:03.142 "params": { 00:19:03.142 "name": "nvme0n1", 00:19:03.142 "enable": true 00:19:03.142 } 00:19:03.142 }, 00:19:03.142 { 00:19:03.142 "method": "bdev_wait_for_examine" 00:19:03.142 } 00:19:03.142 ] 00:19:03.142 }, 00:19:03.142 { 00:19:03.142 "subsystem": "nbd", 00:19:03.142 "config": [] 00:19:03.142 } 00:19:03.142 ] 00:19:03.142 }' 00:19:03.142 [2024-07-16 00:55:37.856707] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
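Unlike the first bdevperf run, this instance receives its entire configuration (the key0 keyring entry, the TLS-enabled bdev_nvme_attach_controller and bdev_enable_histogram) through -c /dev/fd/63 at startup, so the script only needs to confirm the controller exists and start I/O, which is what target/tls.sh lines 277-278 in the trace below do. The same two checks written out:

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

# The pre-loaded config should have produced a controller named nvme0
name=$($RPC bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || { echo "controller not attached" >&2; exit 1; }

# Run the verify workload defined on the bdevperf command line
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
  -s /var/tmp/bdevperf.sock perform_tests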
00:19:03.142 [2024-07-16 00:55:37.856793] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671132 ] 00:19:03.142 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.399 [2024-07-16 00:55:37.925075] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.400 [2024-07-16 00:55:38.042648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.657 [2024-07-16 00:55:38.220857] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:04.221 00:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:04.221 00:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:04.221 00:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:04.221 00:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:19:04.477 00:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.477 00:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:04.477 Running I/O for 1 seconds... 00:19:05.886 00:19:05.886 Latency(us) 00:19:05.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.886 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:05.886 Verification LBA range: start 0x0 length 0x2000 00:19:05.886 nvme0n1 : 1.07 1559.32 6.09 0.00 0.00 79887.99 11990.66 107964.49 00:19:05.886 =================================================================================================================== 00:19:05.886 Total : 1559.32 6.09 0.00 0.00 79887.99 11990.66 107964.49 00:19:05.886 0 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:05.886 nvmf_trace.0 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2671132 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2671132 ']' 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 
-- # kill -0 2671132 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2671132 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2671132' 00:19:05.886 killing process with pid 2671132 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2671132 00:19:05.886 Received shutdown signal, test time was about 1.000000 seconds 00:19:05.886 00:19:05.886 Latency(us) 00:19:05.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.886 =================================================================================================================== 00:19:05.886 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2671132 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:05.886 00:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:05.886 rmmod nvme_tcp 00:19:05.886 rmmod nvme_fabrics 00:19:06.142 rmmod nvme_keyring 00:19:06.142 00:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:06.142 00:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:06.142 00:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:06.142 00:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2671072 ']' 00:19:06.142 00:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2671072 00:19:06.142 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2671072 ']' 00:19:06.142 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2671072 00:19:06.142 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:06.142 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:06.142 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2671072 00:19:06.142 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:06.142 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:06.142 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2671072' 00:19:06.142 killing process with pid 2671072 00:19:06.142 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2671072 00:19:06.142 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2671072 00:19:06.403 00:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:06.403 00:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:06.403 00:55:40 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:06.403 00:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:06.403 00:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:06.403 00:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.403 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.403 00:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.303 00:55:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:08.303 00:55:43 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.a1P3JgukTd /tmp/tmp.28pvLFjeNL /tmp/tmp.IScbN68pi4 00:19:08.303 00:19:08.303 real 1m23.385s 00:19:08.303 user 2m11.098s 00:19:08.303 sys 0m29.149s 00:19:08.303 00:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:08.303 00:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.303 ************************************ 00:19:08.303 END TEST nvmf_tls 00:19:08.303 ************************************ 00:19:08.562 00:55:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:08.562 00:55:43 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:08.562 00:55:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:08.562 00:55:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:08.562 00:55:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:08.562 ************************************ 00:19:08.562 START TEST nvmf_fips 00:19:08.562 ************************************ 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:08.562 * Looking for test storage... 
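Before nvmf_fips gets going, note the teardown nvmf_tls just performed: both bdevperf and the target were killed, nvmftestfini unloaded the kernel nvme-tcp and nvme-fabrics modules, the namespaced interface address was flushed, and the three PSK interchange files were deleted, all within the 1m23s wall-clock time reported for the test. Collected in one place (the module unloads only succeed once nothing is still using the initiator stack):

sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
ip -4 addr flush cvl_0_1
rm -f /tmp/tmp.a1P3JgukTd /tmp/tmp.28pvLFjeNL /tmp/tmp.IScbN68pi4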
00:19:08.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.562 00:55:43 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.563 00:55:43 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:19:08.563 Error setting digest 00:19:08.563 0052CB12A07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:08.563 0052CB12A07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:19:08.563 00:55:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:10.463 
00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:10.463 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:10.463 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:10.463 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:10.464 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:10.464 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:10.464 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:10.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:10.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:19:10.722 00:19:10.722 --- 10.0.0.2 ping statistics --- 00:19:10.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.722 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:10.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:10.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:19:10.722 00:19:10.722 --- 10.0.0.1 ping statistics --- 00:19:10.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.722 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2673484 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2673484 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2673484 ']' 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:10.722 00:55:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:10.722 [2024-07-16 00:55:45.362622] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:19:10.722 [2024-07-16 00:55:45.362727] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.722 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.722 [2024-07-16 00:55:45.432386] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.981 [2024-07-16 00:55:45.547480] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.981 [2024-07-16 00:55:45.547553] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:10.981 [2024-07-16 00:55:45.547570] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.981 [2024-07-16 00:55:45.547583] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.981 [2024-07-16 00:55:45.547594] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:10.981 [2024-07-16 00:55:45.547625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.581 00:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:11.581 00:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:11.581 00:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:11.581 00:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:11.581 00:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:11.839 00:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.839 00:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:11.839 00:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:11.839 00:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:11.839 00:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:11.839 00:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:11.839 00:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:11.839 00:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:11.839 00:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:11.839 [2024-07-16 00:55:46.564160] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.839 [2024-07-16 00:55:46.580144] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:11.839 [2024-07-16 00:55:46.580370] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.097 [2024-07-16 00:55:46.611667] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:12.097 malloc0 00:19:12.097 00:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:12.097 00:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2673647 00:19:12.097 00:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:12.098 00:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2673647 /var/tmp/bdevperf.sock 00:19:12.098 00:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2673647 ']' 00:19:12.098 00:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:12.098 00:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:19:12.098 00:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:12.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:12.098 00:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.098 00:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:12.098 [2024-07-16 00:55:46.696797] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:19:12.098 [2024-07-16 00:55:46.696898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673647 ] 00:19:12.098 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.098 [2024-07-16 00:55:46.754670] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.356 [2024-07-16 00:55:46.862795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.356 00:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:12.356 00:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:12.356 00:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:12.615 [2024-07-16 00:55:47.188347] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:12.615 [2024-07-16 00:55:47.188472] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:12.615 TLSTESTn1 00:19:12.615 00:55:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:12.873 Running I/O for 10 seconds... 
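The whole TLS data-path exercise above boils down to three commands: start bdevperf idle in RPC-server mode (-z), attach a TLS-protected NVMe/TCP controller through its private RPC socket, then trigger the queued verify workload. Condensed into a stand-alone sequence using the binaries, flags, NQNs and key file shown in this log; $SPDK is a stand-in for the long workspace path and the sleep replaces the waitforlisten helper:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk     # workspace path from this log
    KEY=$SPDK/test/nvmf/fips/key.txt                           # PSK written a few lines earlier
    SOCK=/var/tmp/bdevperf.sock

    # 1. bdevperf waits for RPCs instead of starting I/O immediately (-z).
    $SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w verify -t 10 &
    sleep 2                                                    # fips.sh uses waitforlisten here

    # 2. Attach the controller with TLS; --psk points at the interchange-format key.
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk $KEY

    # 3. Kick off the 10 s verify run; the results table follows below.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests
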
00:19:22.855 00:19:22.855 Latency(us) 00:19:22.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.855 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:22.855 Verification LBA range: start 0x0 length 0x2000 00:19:22.855 TLSTESTn1 : 10.06 1812.93 7.08 0.00 0.00 70394.98 6262.33 105634.32 00:19:22.855 =================================================================================================================== 00:19:22.855 Total : 1812.93 7.08 0.00 0.00 70394.98 6262.33 105634.32 00:19:22.855 0 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:22.855 nvmf_trace.0 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2673647 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2673647 ']' 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2673647 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2673647 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2673647' 00:19:22.855 killing process with pid 2673647 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2673647 00:19:22.855 Received shutdown signal, test time was about 10.000000 seconds 00:19:22.855 00:19:22.855 Latency(us) 00:19:22.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.855 =================================================================================================================== 00:19:22.855 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:22.855 [2024-07-16 00:55:57.582135] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:22.855 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2673647 00:19:23.114 00:55:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:23.114 00:55:57 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:19:23.114 00:55:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:23.114 00:55:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:23.114 00:55:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:23.114 00:55:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:23.114 00:55:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:23.114 rmmod nvme_tcp 00:19:23.114 rmmod nvme_fabrics 00:19:23.374 rmmod nvme_keyring 00:19:23.374 00:55:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:23.374 00:55:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:23.374 00:55:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:23.374 00:55:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2673484 ']' 00:19:23.374 00:55:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2673484 00:19:23.374 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2673484 ']' 00:19:23.374 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2673484 00:19:23.374 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:23.374 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:23.374 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2673484 00:19:23.374 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:23.375 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:23.375 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2673484' 00:19:23.375 killing process with pid 2673484 00:19:23.375 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2673484 00:19:23.375 [2024-07-16 00:55:57.925528] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:23.375 00:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2673484 00:19:23.633 00:55:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:23.634 00:55:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:23.634 00:55:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:23.634 00:55:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:23.634 00:55:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:23.634 00:55:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.634 00:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.634 00:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.559 00:56:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:25.559 00:56:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:25.559 00:19:25.559 real 0m17.177s 00:19:25.559 user 0m21.054s 00:19:25.559 sys 0m6.734s 00:19:25.559 00:56:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:25.559 00:56:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:25.559 ************************************ 00:19:25.559 END TEST nvmf_fips 
00:19:25.559 ************************************ 00:19:25.559 00:56:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:25.559 00:56:00 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:25.559 00:56:00 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:25.559 00:56:00 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:19:25.559 00:56:00 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:19:25.559 00:56:00 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:19:25.559 00:56:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:28.090 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:28.090 00:56:02 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:28.090 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:28.090 00:56:02 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:28.091 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:28.091 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:19:28.091 00:56:02 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:28.091 00:56:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:28.091 00:56:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
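Each suite in this log, nvmf_tls, nvmf_fips and now nvmf_perf_adq, is launched through the same run_test wrapper from autotest_common.sh: it validates its arguments (the '[' 3 -le 1 ']' probe above), prints the START banner, times the script, and closes with the END banner plus the real/user/sys figures seen at the end of every test. A rough reconstruction inferred from those banners, not copied from SPDK's helper, which also does xtrace bookkeeping:

    # Approximation of run_test as it behaves in this log; names and banners inferred.
    run_test() {
        local name=$1; shift
        (( $# >= 1 )) || { echo "run_test: missing test command" >&2; return 1; }
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                      # e.g. test/nvmf/target/perf_adq.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
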
00:19:28.091 00:56:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:28.091 ************************************ 00:19:28.091 START TEST nvmf_perf_adq 00:19:28.091 ************************************ 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:28.091 * Looking for test storage... 00:19:28.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:28.091 00:56:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.472 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:29.472 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:29.472 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:29.472 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:29.472 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:29.472 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:29.472 00:56:04 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:19:29.472 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:29.472 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:29.472 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:29.472 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:29.472 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:29.472 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:29.472 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:29.472 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:29.473 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:29.473 Found 0000:0a:00.1 (0x8086 - 0x159b) 
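The repeated 'Found 0000:0a:00.x' / 'Found net devices under ...' pairs are the same discovery loop re-running every time common.sh is sourced: pick the PCI functions whose vendor:device IDs are on the supported list (Intel E810, 0x8086:0x159b, on this rig), then resolve each one to its kernel net device through sysfs. A trimmed sketch of that loop; the pci_addrs array stands in for SPDK's cached PCI-bus lookup, and reading operstate is only one plausible way to implement the 'up' filter seen in the trace:

    # Stand-in for gather_supported_nvmf_pci_devs; not the common.sh implementation.
    pci_addrs=(0000:0a:00.0 0000:0a:00.1)   # the two E810 ports found in this log
    net_devs=()

    for pci in "${pci_addrs[@]}"; do
        # The kernel exposes the PCI-to-netdev mapping under sysfs.
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $path ]] || continue
            dev=${path##*/}
            state=$(cat "$path/operstate" 2>/dev/null || echo unknown)
            [[ $state == up ]] || continue   # keep only links that are up, as the trace does
            echo "Found net devices under $pci: $dev"
            net_devs+=("$dev")
        done
    done
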
00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:29.473 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:29.473 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:29.473 00:56:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:30.040 00:56:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:31.943 00:56:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:37.218 00:56:11 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:37.218 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:37.218 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.218 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:37.219 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:37.219 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:37.219 00:56:11 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:37.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:19:37.219 00:19:37.219 --- 10.0.0.2 ping statistics --- 00:19:37.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.219 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:37.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:37.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:19:37.219 00:19:37.219 --- 10.0.0.1 ping statistics --- 00:19:37.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.219 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2679387 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2679387 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2679387 ']' 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.219 00:56:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:37.219 [2024-07-16 00:56:11.905270] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
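The namespace-based test topology that nvmftestinit builds here can be reproduced with the commands below. The interface names (cvl_0_0, cvl_0_1), addresses, and the firewall rule are taken directly from the log above; collecting them into one standalone sequence is only an illustrative sketch of what nvmf_tcp_init does, and it assumes root privileges on a host with the two E810 ports present.

# target port goes into its own network namespace; initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# address both ends of the back-to-back link
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
# bring the links (and the namespace loopback) up
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# admit NVMe/TCP traffic on port 4420 and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1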
00:19:37.219 [2024-07-16 00:56:11.905357] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.219 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.219 [2024-07-16 00:56:11.973572] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:37.478 [2024-07-16 00:56:12.095812] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.478 [2024-07-16 00:56:12.095870] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.478 [2024-07-16 00:56:12.095896] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.478 [2024-07-16 00:56:12.095910] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.478 [2024-07-16 00:56:12.095938] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:37.478 [2024-07-16 00:56:12.096009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.478 [2024-07-16 00:56:12.096039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.478 [2024-07-16 00:56:12.096069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:37.478 [2024-07-16 00:56:12.096071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.473 00:56:12 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.473 [2024-07-16 00:56:13.025957] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.473 Malloc1 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.473 [2024-07-16 00:56:13.079308] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2679546 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:38.473 00:56:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:38.473 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.386 00:56:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:40.386 00:56:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.386 00:56:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.386 00:56:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.386 00:56:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:40.386 
"tick_rate": 2700000000, 00:19:40.386 "poll_groups": [ 00:19:40.386 { 00:19:40.386 "name": "nvmf_tgt_poll_group_000", 00:19:40.386 "admin_qpairs": 1, 00:19:40.386 "io_qpairs": 1, 00:19:40.386 "current_admin_qpairs": 1, 00:19:40.386 "current_io_qpairs": 1, 00:19:40.386 "pending_bdev_io": 0, 00:19:40.386 "completed_nvme_io": 18608, 00:19:40.386 "transports": [ 00:19:40.386 { 00:19:40.386 "trtype": "TCP" 00:19:40.386 } 00:19:40.386 ] 00:19:40.386 }, 00:19:40.386 { 00:19:40.386 "name": "nvmf_tgt_poll_group_001", 00:19:40.386 "admin_qpairs": 0, 00:19:40.386 "io_qpairs": 1, 00:19:40.386 "current_admin_qpairs": 0, 00:19:40.386 "current_io_qpairs": 1, 00:19:40.386 "pending_bdev_io": 0, 00:19:40.386 "completed_nvme_io": 20484, 00:19:40.386 "transports": [ 00:19:40.386 { 00:19:40.386 "trtype": "TCP" 00:19:40.386 } 00:19:40.386 ] 00:19:40.386 }, 00:19:40.386 { 00:19:40.386 "name": "nvmf_tgt_poll_group_002", 00:19:40.386 "admin_qpairs": 0, 00:19:40.386 "io_qpairs": 1, 00:19:40.386 "current_admin_qpairs": 0, 00:19:40.386 "current_io_qpairs": 1, 00:19:40.386 "pending_bdev_io": 0, 00:19:40.386 "completed_nvme_io": 19784, 00:19:40.386 "transports": [ 00:19:40.386 { 00:19:40.386 "trtype": "TCP" 00:19:40.386 } 00:19:40.386 ] 00:19:40.386 }, 00:19:40.386 { 00:19:40.386 "name": "nvmf_tgt_poll_group_003", 00:19:40.386 "admin_qpairs": 0, 00:19:40.386 "io_qpairs": 1, 00:19:40.386 "current_admin_qpairs": 0, 00:19:40.386 "current_io_qpairs": 1, 00:19:40.386 "pending_bdev_io": 0, 00:19:40.386 "completed_nvme_io": 20336, 00:19:40.386 "transports": [ 00:19:40.386 { 00:19:40.386 "trtype": "TCP" 00:19:40.386 } 00:19:40.386 ] 00:19:40.386 } 00:19:40.386 ] 00:19:40.386 }' 00:19:40.386 00:56:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:40.386 00:56:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:40.644 00:56:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:40.644 00:56:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:40.644 00:56:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2679546 00:19:48.763 Initializing NVMe Controllers 00:19:48.763 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:48.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:48.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:48.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:48.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:48.763 Initialization complete. Launching workers. 
00:19:48.763 ======================================================== 00:19:48.763 Latency(us) 00:19:48.763 Device Information : IOPS MiB/s Average min max 00:19:48.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10692.40 41.77 5987.53 2253.49 9547.93 00:19:48.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10747.10 41.98 5956.80 1606.80 9528.64 00:19:48.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10335.90 40.37 6192.75 1421.80 10326.87 00:19:48.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9742.50 38.06 6568.70 2091.18 10194.10 00:19:48.763 ======================================================== 00:19:48.763 Total : 41517.90 162.18 6167.04 1421.80 10326.87 00:19:48.763 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:48.763 rmmod nvme_tcp 00:19:48.763 rmmod nvme_fabrics 00:19:48.763 rmmod nvme_keyring 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2679387 ']' 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2679387 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2679387 ']' 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2679387 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2679387 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2679387' 00:19:48.763 killing process with pid 2679387 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2679387 00:19:48.763 00:56:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2679387 00:19:49.023 00:56:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:49.023 00:56:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:49.023 00:56:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:49.023 00:56:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:49.023 00:56:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:49.023 00:56:23 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.023 00:56:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:49.023 00:56:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.932 00:56:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:50.932 00:56:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:50.932 00:56:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:51.500 00:56:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:54.031 00:56:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.297 00:56:33 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:59.297 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:59.297 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:59.297 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.297 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:59.298 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:59.298 
00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:59.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:19:59.298 00:19:59.298 --- 10.0.0.2 ping statistics --- 00:19:59.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.298 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:59.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:59.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:19:59.298 00:19:59.298 --- 10.0.0.1 ping statistics --- 00:19:59.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.298 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:59.298 net.core.busy_poll = 1 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:59.298 net.core.busy_read = 1 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2682186 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2682186 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2682186 ']' 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:59.298 00:56:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.298 [2024-07-16 00:56:33.580721] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:19:59.298 [2024-07-16 00:56:33.580802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.298 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.298 [2024-07-16 00:56:33.655458] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:59.298 [2024-07-16 00:56:33.772223] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.298 [2024-07-16 00:56:33.772302] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.298 [2024-07-16 00:56:33.772317] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.298 [2024-07-16 00:56:33.772330] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.298 [2024-07-16 00:56:33.772342] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
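The host-side ADQ setup that adq_configure_driver performs above amounts to the sequence below. The individual commands are copied from the log; $SPDK_DIR is a placeholder for the SPDK checkout used in this run, and grouping them into one block is only a sketch, not the literal perf_adq.sh code.

# enable hardware TC offload and disable the packet-inspect optimization on the target port
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
# enable kernel busy polling so socket reads poll the NIC queues
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# create two traffic classes (2 queues each) in channel mode, add an ingress qdisc,
# and steer NVMe/TCP traffic to 10.0.0.2:4420 into TC 1 in hardware
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# pin XPS/RXQ affinity for the configured queues (helper script shipped with SPDK)
ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/scripts/perf/nvmf/set_xps_rxqs cvl_0_0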
00:19:59.298 [2024-07-16 00:56:33.772436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.298 [2024-07-16 00:56:33.772509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.298 [2024-07-16 00:56:33.772599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:59.298 [2024-07-16 00:56:33.772601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.862 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.862 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:59.862 00:56:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:59.862 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:59.862 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.862 00:56:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.862 00:56:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:59.862 00:56:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:59.863 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.863 00:56:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:59.863 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.863 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.120 00:56:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:00.120 00:56:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:00.120 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.120 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.120 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.120 00:56:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:00.120 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.120 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.120 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.120 00:56:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:00.120 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.120 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.121 [2024-07-16 00:56:34.754617] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.121 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.121 00:56:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:00.121 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.121 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.121 Malloc1 00:20:00.121 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.121 00:56:34 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:00.121 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.121 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.121 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.121 00:56:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:00.121 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.121 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.121 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.121 00:56:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:00.121 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.121 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.121 [2024-07-16 00:56:34.806323] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.121 00:56:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.121 00:56:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2682435 00:20:00.121 00:56:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:00.121 00:56:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:00.121 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.652 00:56:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:02.652 00:56:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.652 00:56:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:02.652 00:56:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.652 00:56:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:20:02.652 "tick_rate": 2700000000, 00:20:02.652 "poll_groups": [ 00:20:02.652 { 00:20:02.652 "name": "nvmf_tgt_poll_group_000", 00:20:02.652 "admin_qpairs": 1, 00:20:02.652 "io_qpairs": 1, 00:20:02.652 "current_admin_qpairs": 1, 00:20:02.652 "current_io_qpairs": 1, 00:20:02.652 "pending_bdev_io": 0, 00:20:02.652 "completed_nvme_io": 21478, 00:20:02.652 "transports": [ 00:20:02.652 { 00:20:02.652 "trtype": "TCP" 00:20:02.652 } 00:20:02.652 ] 00:20:02.652 }, 00:20:02.652 { 00:20:02.652 "name": "nvmf_tgt_poll_group_001", 00:20:02.652 "admin_qpairs": 0, 00:20:02.652 "io_qpairs": 3, 00:20:02.652 "current_admin_qpairs": 0, 00:20:02.652 "current_io_qpairs": 3, 00:20:02.652 "pending_bdev_io": 0, 00:20:02.652 "completed_nvme_io": 27643, 00:20:02.652 "transports": [ 00:20:02.652 { 00:20:02.652 "trtype": "TCP" 00:20:02.652 } 00:20:02.652 ] 00:20:02.652 }, 00:20:02.652 { 00:20:02.652 "name": "nvmf_tgt_poll_group_002", 00:20:02.652 "admin_qpairs": 0, 00:20:02.652 "io_qpairs": 0, 00:20:02.652 "current_admin_qpairs": 0, 00:20:02.652 "current_io_qpairs": 0, 00:20:02.652 "pending_bdev_io": 0, 00:20:02.652 "completed_nvme_io": 0, 
00:20:02.652 "transports": [ 00:20:02.653 { 00:20:02.653 "trtype": "TCP" 00:20:02.653 } 00:20:02.653 ] 00:20:02.653 }, 00:20:02.653 { 00:20:02.653 "name": "nvmf_tgt_poll_group_003", 00:20:02.653 "admin_qpairs": 0, 00:20:02.653 "io_qpairs": 0, 00:20:02.653 "current_admin_qpairs": 0, 00:20:02.653 "current_io_qpairs": 0, 00:20:02.653 "pending_bdev_io": 0, 00:20:02.653 "completed_nvme_io": 0, 00:20:02.653 "transports": [ 00:20:02.653 { 00:20:02.653 "trtype": "TCP" 00:20:02.653 } 00:20:02.653 ] 00:20:02.653 } 00:20:02.653 ] 00:20:02.653 }' 00:20:02.653 00:56:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:02.653 00:56:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:20:02.653 00:56:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:20:02.653 00:56:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:20:02.653 00:56:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2682435 00:20:10.802 Initializing NVMe Controllers 00:20:10.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:10.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:10.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:10.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:10.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:10.802 Initialization complete. Launching workers. 00:20:10.802 ======================================================== 00:20:10.802 Latency(us) 00:20:10.802 Device Information : IOPS MiB/s Average min max 00:20:10.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4884.80 19.08 13104.66 1934.56 61439.11 00:20:10.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4662.50 18.21 13735.08 1906.59 61306.62 00:20:10.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4924.60 19.24 13003.64 2273.63 61237.94 00:20:10.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11264.80 44.00 5699.62 1522.28 47207.73 00:20:10.803 ======================================================== 00:20:10.803 Total : 25736.70 100.53 9958.40 1522.28 61439.11 00:20:10.803 00:20:10.803 00:56:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:20:10.803 00:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:10.803 00:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:10.803 00:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:10.803 00:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:10.803 00:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:10.803 00:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:10.803 rmmod nvme_tcp 00:20:10.803 rmmod nvme_fabrics 00:20:10.803 rmmod nvme_keyring 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2682186 ']' 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2682186 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2682186 ']' 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2682186 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2682186 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2682186' 00:20:10.803 killing process with pid 2682186 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2682186 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2682186 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.803 00:56:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.095 00:56:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:14.095 00:56:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:14.095 00:20:14.095 real 0m46.074s 00:20:14.095 user 2m39.223s 00:20:14.095 sys 0m12.101s 00:20:14.095 00:56:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:14.095 00:56:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.095 ************************************ 00:20:14.095 END TEST nvmf_perf_adq 00:20:14.095 ************************************ 00:20:14.095 00:56:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:14.095 00:56:48 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:14.095 00:56:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:14.095 00:56:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:14.095 00:56:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:14.095 ************************************ 00:20:14.095 START TEST nvmf_shutdown 00:20:14.095 ************************************ 00:20:14.095 00:56:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:14.095 * Looking for test storage... 
00:20:14.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:14.095 00:56:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:14.095 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:14.095 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.095 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.095 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.095 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.095 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:14.096 ************************************ 00:20:14.096 START TEST nvmf_shutdown_tc1 00:20:14.096 ************************************ 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:20:14.096 00:56:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:14.096 00:56:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:16.002 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:16.002 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.002 00:56:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:16.002 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:16.002 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:16.002 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:16.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:16.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:20:16.003 00:20:16.003 --- 10.0.0.2 ping statistics --- 00:20:16.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.003 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:16.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:16.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:20:16.003 00:20:16.003 --- 10.0.0.1 ping statistics --- 00:20:16.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.003 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2685682 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2685682 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2685682 ']' 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:16.003 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:16.003 [2024-07-16 00:56:50.602871] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
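Editor's sketch: the nvmf_tcp_init sequence traced above isolates one port of the e810 pair in a network namespace so the target (10.0.0.2 on cvl_0_0 inside the namespace) and the initiator (10.0.0.1 on cvl_0_1 on the host) talk over a real link. Condensed from the commands in the trace; interface names, addresses and the port come straight from the log, and all commands need root.

# Target-side NIC moves into its own namespace; the initiator stays on the host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # NVMe/TCP listener port
ping -c 1 10.0.0.2                                  # host -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host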
00:20:16.003 [2024-07-16 00:56:50.602999] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.003 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.003 [2024-07-16 00:56:50.670828] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:16.262 [2024-07-16 00:56:50.785267] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.262 [2024-07-16 00:56:50.785318] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.262 [2024-07-16 00:56:50.785347] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.262 [2024-07-16 00:56:50.785359] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.262 [2024-07-16 00:56:50.785369] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:16.262 [2024-07-16 00:56:50.785465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.262 [2024-07-16 00:56:50.785520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:16.262 [2024-07-16 00:56:50.785568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:16.262 [2024-07-16 00:56:50.785571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.262 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:16.262 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:16.262 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:16.262 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:16.262 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:16.263 [2024-07-16 00:56:50.955840] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:16.263 00:56:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.263 00:56:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:16.523 Malloc1 00:20:16.523 [2024-07-16 00:56:51.041622] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.523 Malloc2 00:20:16.523 Malloc3 00:20:16.523 Malloc4 00:20:16.523 Malloc5 00:20:16.523 Malloc6 00:20:16.781 Malloc7 00:20:16.781 Malloc8 00:20:16.781 Malloc9 00:20:16.781 Malloc10 00:20:16.781 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2685788 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2685788 
/var/tmp/bdevperf.sock 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2685788 ']' 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:16.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:16.782 { 00:20:16.782 "params": { 00:20:16.782 "name": "Nvme$subsystem", 00:20:16.782 "trtype": "$TEST_TRANSPORT", 00:20:16.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.782 "adrfam": "ipv4", 00:20:16.782 "trsvcid": "$NVMF_PORT", 00:20:16.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.782 "hdgst": ${hdgst:-false}, 00:20:16.782 "ddgst": ${ddgst:-false} 00:20:16.782 }, 00:20:16.782 "method": "bdev_nvme_attach_controller" 00:20:16.782 } 00:20:16.782 EOF 00:20:16.782 )") 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:16.782 { 00:20:16.782 "params": { 00:20:16.782 "name": "Nvme$subsystem", 00:20:16.782 "trtype": "$TEST_TRANSPORT", 00:20:16.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.782 "adrfam": "ipv4", 00:20:16.782 "trsvcid": "$NVMF_PORT", 00:20:16.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.782 "hdgst": ${hdgst:-false}, 00:20:16.782 "ddgst": ${ddgst:-false} 00:20:16.782 }, 00:20:16.782 "method": "bdev_nvme_attach_controller" 00:20:16.782 } 00:20:16.782 EOF 00:20:16.782 )") 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:16.782 { 00:20:16.782 "params": { 00:20:16.782 
"name": "Nvme$subsystem", 00:20:16.782 "trtype": "$TEST_TRANSPORT", 00:20:16.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.782 "adrfam": "ipv4", 00:20:16.782 "trsvcid": "$NVMF_PORT", 00:20:16.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.782 "hdgst": ${hdgst:-false}, 00:20:16.782 "ddgst": ${ddgst:-false} 00:20:16.782 }, 00:20:16.782 "method": "bdev_nvme_attach_controller" 00:20:16.782 } 00:20:16.782 EOF 00:20:16.782 )") 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:16.782 { 00:20:16.782 "params": { 00:20:16.782 "name": "Nvme$subsystem", 00:20:16.782 "trtype": "$TEST_TRANSPORT", 00:20:16.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.782 "adrfam": "ipv4", 00:20:16.782 "trsvcid": "$NVMF_PORT", 00:20:16.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.782 "hdgst": ${hdgst:-false}, 00:20:16.782 "ddgst": ${ddgst:-false} 00:20:16.782 }, 00:20:16.782 "method": "bdev_nvme_attach_controller" 00:20:16.782 } 00:20:16.782 EOF 00:20:16.782 )") 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:16.782 { 00:20:16.782 "params": { 00:20:16.782 "name": "Nvme$subsystem", 00:20:16.782 "trtype": "$TEST_TRANSPORT", 00:20:16.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.782 "adrfam": "ipv4", 00:20:16.782 "trsvcid": "$NVMF_PORT", 00:20:16.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.782 "hdgst": ${hdgst:-false}, 00:20:16.782 "ddgst": ${ddgst:-false} 00:20:16.782 }, 00:20:16.782 "method": "bdev_nvme_attach_controller" 00:20:16.782 } 00:20:16.782 EOF 00:20:16.782 )") 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:16.782 { 00:20:16.782 "params": { 00:20:16.782 "name": "Nvme$subsystem", 00:20:16.782 "trtype": "$TEST_TRANSPORT", 00:20:16.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.782 "adrfam": "ipv4", 00:20:16.782 "trsvcid": "$NVMF_PORT", 00:20:16.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.782 "hdgst": ${hdgst:-false}, 00:20:16.782 "ddgst": ${ddgst:-false} 00:20:16.782 }, 00:20:16.782 "method": "bdev_nvme_attach_controller" 00:20:16.782 } 00:20:16.782 EOF 00:20:16.782 )") 00:20:16.782 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:17.042 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:17.042 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:17.042 { 00:20:17.042 "params": { 00:20:17.042 "name": "Nvme$subsystem", 
00:20:17.042 "trtype": "$TEST_TRANSPORT", 00:20:17.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.042 "adrfam": "ipv4", 00:20:17.042 "trsvcid": "$NVMF_PORT", 00:20:17.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.042 "hdgst": ${hdgst:-false}, 00:20:17.042 "ddgst": ${ddgst:-false} 00:20:17.042 }, 00:20:17.042 "method": "bdev_nvme_attach_controller" 00:20:17.042 } 00:20:17.042 EOF 00:20:17.042 )") 00:20:17.042 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:17.042 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:17.042 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:17.042 { 00:20:17.042 "params": { 00:20:17.042 "name": "Nvme$subsystem", 00:20:17.042 "trtype": "$TEST_TRANSPORT", 00:20:17.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.042 "adrfam": "ipv4", 00:20:17.042 "trsvcid": "$NVMF_PORT", 00:20:17.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.042 "hdgst": ${hdgst:-false}, 00:20:17.042 "ddgst": ${ddgst:-false} 00:20:17.042 }, 00:20:17.042 "method": "bdev_nvme_attach_controller" 00:20:17.042 } 00:20:17.042 EOF 00:20:17.042 )") 00:20:17.042 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:17.042 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:17.042 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:17.042 { 00:20:17.042 "params": { 00:20:17.042 "name": "Nvme$subsystem", 00:20:17.042 "trtype": "$TEST_TRANSPORT", 00:20:17.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.042 "adrfam": "ipv4", 00:20:17.042 "trsvcid": "$NVMF_PORT", 00:20:17.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.042 "hdgst": ${hdgst:-false}, 00:20:17.042 "ddgst": ${ddgst:-false} 00:20:17.042 }, 00:20:17.042 "method": "bdev_nvme_attach_controller" 00:20:17.042 } 00:20:17.042 EOF 00:20:17.042 )") 00:20:17.042 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:17.042 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:17.042 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:17.042 { 00:20:17.042 "params": { 00:20:17.042 "name": "Nvme$subsystem", 00:20:17.042 "trtype": "$TEST_TRANSPORT", 00:20:17.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.042 "adrfam": "ipv4", 00:20:17.042 "trsvcid": "$NVMF_PORT", 00:20:17.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.042 "hdgst": ${hdgst:-false}, 00:20:17.042 "ddgst": ${ddgst:-false} 00:20:17.042 }, 00:20:17.042 "method": "bdev_nvme_attach_controller" 00:20:17.042 } 00:20:17.042 EOF 00:20:17.042 )") 00:20:17.042 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:17.042 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:20:17.042 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:17.042 00:56:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:17.042 "params": { 00:20:17.042 "name": "Nvme1", 00:20:17.042 "trtype": "tcp", 00:20:17.042 "traddr": "10.0.0.2", 00:20:17.042 "adrfam": "ipv4", 00:20:17.042 "trsvcid": "4420", 00:20:17.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.042 "hdgst": false, 00:20:17.042 "ddgst": false 00:20:17.042 }, 00:20:17.042 "method": "bdev_nvme_attach_controller" 00:20:17.042 },{ 00:20:17.042 "params": { 00:20:17.042 "name": "Nvme2", 00:20:17.042 "trtype": "tcp", 00:20:17.042 "traddr": "10.0.0.2", 00:20:17.042 "adrfam": "ipv4", 00:20:17.042 "trsvcid": "4420", 00:20:17.042 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:17.042 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:17.042 "hdgst": false, 00:20:17.042 "ddgst": false 00:20:17.042 }, 00:20:17.042 "method": "bdev_nvme_attach_controller" 00:20:17.042 },{ 00:20:17.042 "params": { 00:20:17.042 "name": "Nvme3", 00:20:17.042 "trtype": "tcp", 00:20:17.042 "traddr": "10.0.0.2", 00:20:17.042 "adrfam": "ipv4", 00:20:17.042 "trsvcid": "4420", 00:20:17.042 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:17.042 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:17.042 "hdgst": false, 00:20:17.042 "ddgst": false 00:20:17.042 }, 00:20:17.042 "method": "bdev_nvme_attach_controller" 00:20:17.042 },{ 00:20:17.042 "params": { 00:20:17.042 "name": "Nvme4", 00:20:17.042 "trtype": "tcp", 00:20:17.042 "traddr": "10.0.0.2", 00:20:17.042 "adrfam": "ipv4", 00:20:17.042 "trsvcid": "4420", 00:20:17.042 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:17.042 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:17.042 "hdgst": false, 00:20:17.042 "ddgst": false 00:20:17.042 }, 00:20:17.042 "method": "bdev_nvme_attach_controller" 00:20:17.042 },{ 00:20:17.042 "params": { 00:20:17.042 "name": "Nvme5", 00:20:17.042 "trtype": "tcp", 00:20:17.042 "traddr": "10.0.0.2", 00:20:17.042 "adrfam": "ipv4", 00:20:17.042 "trsvcid": "4420", 00:20:17.042 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:17.043 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:17.043 "hdgst": false, 00:20:17.043 "ddgst": false 00:20:17.043 }, 00:20:17.043 "method": "bdev_nvme_attach_controller" 00:20:17.043 },{ 00:20:17.043 "params": { 00:20:17.043 "name": "Nvme6", 00:20:17.043 "trtype": "tcp", 00:20:17.043 "traddr": "10.0.0.2", 00:20:17.043 "adrfam": "ipv4", 00:20:17.043 "trsvcid": "4420", 00:20:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:17.043 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:17.043 "hdgst": false, 00:20:17.043 "ddgst": false 00:20:17.043 }, 00:20:17.043 "method": "bdev_nvme_attach_controller" 00:20:17.043 },{ 00:20:17.043 "params": { 00:20:17.043 "name": "Nvme7", 00:20:17.043 "trtype": "tcp", 00:20:17.043 "traddr": "10.0.0.2", 00:20:17.043 "adrfam": "ipv4", 00:20:17.043 "trsvcid": "4420", 00:20:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:17.043 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:17.043 "hdgst": false, 00:20:17.043 "ddgst": false 00:20:17.043 }, 00:20:17.043 "method": "bdev_nvme_attach_controller" 00:20:17.043 },{ 00:20:17.043 "params": { 00:20:17.043 "name": "Nvme8", 00:20:17.043 "trtype": "tcp", 00:20:17.043 "traddr": "10.0.0.2", 00:20:17.043 "adrfam": "ipv4", 00:20:17.043 "trsvcid": "4420", 00:20:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:17.043 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:17.043 "hdgst": false, 
00:20:17.043 "ddgst": false 00:20:17.043 }, 00:20:17.043 "method": "bdev_nvme_attach_controller" 00:20:17.043 },{ 00:20:17.043 "params": { 00:20:17.043 "name": "Nvme9", 00:20:17.043 "trtype": "tcp", 00:20:17.043 "traddr": "10.0.0.2", 00:20:17.043 "adrfam": "ipv4", 00:20:17.043 "trsvcid": "4420", 00:20:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:17.043 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:17.043 "hdgst": false, 00:20:17.043 "ddgst": false 00:20:17.043 }, 00:20:17.043 "method": "bdev_nvme_attach_controller" 00:20:17.043 },{ 00:20:17.043 "params": { 00:20:17.043 "name": "Nvme10", 00:20:17.043 "trtype": "tcp", 00:20:17.043 "traddr": "10.0.0.2", 00:20:17.043 "adrfam": "ipv4", 00:20:17.043 "trsvcid": "4420", 00:20:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:17.043 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:17.043 "hdgst": false, 00:20:17.043 "ddgst": false 00:20:17.043 }, 00:20:17.043 "method": "bdev_nvme_attach_controller" 00:20:17.043 }' 00:20:17.043 [2024-07-16 00:56:51.563833] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:20:17.043 [2024-07-16 00:56:51.563944] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:17.043 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.043 [2024-07-16 00:56:51.628095] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.043 [2024-07-16 00:56:51.740438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.945 00:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.945 00:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:18.945 00:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:18.945 00:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.945 00:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:18.945 00:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.945 00:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2685788 00:20:18.945 00:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:18.945 00:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:19.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2685788 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2685682 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:19.879 00:56:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.879 { 00:20:19.879 "params": { 00:20:19.879 "name": "Nvme$subsystem", 00:20:19.879 "trtype": "$TEST_TRANSPORT", 00:20:19.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.879 "adrfam": "ipv4", 00:20:19.879 "trsvcid": "$NVMF_PORT", 00:20:19.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.879 "hdgst": ${hdgst:-false}, 00:20:19.879 "ddgst": ${ddgst:-false} 00:20:19.879 }, 00:20:19.879 "method": "bdev_nvme_attach_controller" 00:20:19.879 } 00:20:19.879 EOF 00:20:19.879 )") 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.879 { 00:20:19.879 "params": { 00:20:19.879 "name": "Nvme$subsystem", 00:20:19.879 "trtype": "$TEST_TRANSPORT", 00:20:19.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.879 "adrfam": "ipv4", 00:20:19.879 "trsvcid": "$NVMF_PORT", 00:20:19.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.879 "hdgst": ${hdgst:-false}, 00:20:19.879 "ddgst": ${ddgst:-false} 00:20:19.879 }, 00:20:19.879 "method": "bdev_nvme_attach_controller" 00:20:19.879 } 00:20:19.879 EOF 00:20:19.879 )") 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.879 { 00:20:19.879 "params": { 00:20:19.879 "name": "Nvme$subsystem", 00:20:19.879 "trtype": "$TEST_TRANSPORT", 00:20:19.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.879 "adrfam": "ipv4", 00:20:19.879 "trsvcid": "$NVMF_PORT", 00:20:19.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.879 "hdgst": ${hdgst:-false}, 00:20:19.879 "ddgst": ${ddgst:-false} 00:20:19.879 }, 00:20:19.879 "method": "bdev_nvme_attach_controller" 00:20:19.879 } 00:20:19.879 EOF 00:20:19.879 )") 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.879 { 00:20:19.879 "params": { 00:20:19.879 "name": "Nvme$subsystem", 00:20:19.879 "trtype": "$TEST_TRANSPORT", 00:20:19.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.879 "adrfam": "ipv4", 00:20:19.879 "trsvcid": "$NVMF_PORT", 00:20:19.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.879 "hdgst": ${hdgst:-false}, 00:20:19.879 "ddgst": ${ddgst:-false} 00:20:19.879 }, 00:20:19.879 "method": "bdev_nvme_attach_controller" 00:20:19.879 } 00:20:19.879 EOF 00:20:19.879 )") 00:20:19.879 00:56:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.879 { 00:20:19.879 "params": { 00:20:19.879 "name": "Nvme$subsystem", 00:20:19.879 "trtype": "$TEST_TRANSPORT", 00:20:19.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.879 "adrfam": "ipv4", 00:20:19.879 "trsvcid": "$NVMF_PORT", 00:20:19.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.879 "hdgst": ${hdgst:-false}, 00:20:19.879 "ddgst": ${ddgst:-false} 00:20:19.879 }, 00:20:19.879 "method": "bdev_nvme_attach_controller" 00:20:19.879 } 00:20:19.879 EOF 00:20:19.879 )") 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.879 { 00:20:19.879 "params": { 00:20:19.879 "name": "Nvme$subsystem", 00:20:19.879 "trtype": "$TEST_TRANSPORT", 00:20:19.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.879 "adrfam": "ipv4", 00:20:19.879 "trsvcid": "$NVMF_PORT", 00:20:19.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.879 "hdgst": ${hdgst:-false}, 00:20:19.879 "ddgst": ${ddgst:-false} 00:20:19.879 }, 00:20:19.879 "method": "bdev_nvme_attach_controller" 00:20:19.879 } 00:20:19.879 EOF 00:20:19.879 )") 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.879 { 00:20:19.879 "params": { 00:20:19.879 "name": "Nvme$subsystem", 00:20:19.879 "trtype": "$TEST_TRANSPORT", 00:20:19.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.879 "adrfam": "ipv4", 00:20:19.879 "trsvcid": "$NVMF_PORT", 00:20:19.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.879 "hdgst": ${hdgst:-false}, 00:20:19.879 "ddgst": ${ddgst:-false} 00:20:19.879 }, 00:20:19.879 "method": "bdev_nvme_attach_controller" 00:20:19.879 } 00:20:19.879 EOF 00:20:19.879 )") 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.879 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.879 { 00:20:19.879 "params": { 00:20:19.879 "name": "Nvme$subsystem", 00:20:19.879 "trtype": "$TEST_TRANSPORT", 00:20:19.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.880 "adrfam": "ipv4", 00:20:19.880 "trsvcid": "$NVMF_PORT", 00:20:19.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.880 "hdgst": ${hdgst:-false}, 00:20:19.880 "ddgst": ${ddgst:-false} 00:20:19.880 }, 00:20:19.880 "method": "bdev_nvme_attach_controller" 00:20:19.880 } 00:20:19.880 EOF 00:20:19.880 )") 00:20:19.880 00:56:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:19.880 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.880 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.880 { 00:20:19.880 "params": { 00:20:19.880 "name": "Nvme$subsystem", 00:20:19.880 "trtype": "$TEST_TRANSPORT", 00:20:19.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.880 "adrfam": "ipv4", 00:20:19.880 "trsvcid": "$NVMF_PORT", 00:20:19.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.880 "hdgst": ${hdgst:-false}, 00:20:19.880 "ddgst": ${ddgst:-false} 00:20:19.880 }, 00:20:19.880 "method": "bdev_nvme_attach_controller" 00:20:19.880 } 00:20:19.880 EOF 00:20:19.880 )") 00:20:19.880 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:19.880 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.880 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.880 { 00:20:19.880 "params": { 00:20:19.880 "name": "Nvme$subsystem", 00:20:19.880 "trtype": "$TEST_TRANSPORT", 00:20:19.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.880 "adrfam": "ipv4", 00:20:19.880 "trsvcid": "$NVMF_PORT", 00:20:19.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.880 "hdgst": ${hdgst:-false}, 00:20:19.880 "ddgst": ${ddgst:-false} 00:20:19.880 }, 00:20:19.880 "method": "bdev_nvme_attach_controller" 00:20:19.880 } 00:20:19.880 EOF 00:20:19.880 )") 00:20:19.880 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:19.880 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:20:19.880 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:19.880 00:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:19.880 "params": { 00:20:19.880 "name": "Nvme1", 00:20:19.880 "trtype": "tcp", 00:20:19.880 "traddr": "10.0.0.2", 00:20:19.880 "adrfam": "ipv4", 00:20:19.880 "trsvcid": "4420", 00:20:19.880 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.880 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.880 "hdgst": false, 00:20:19.880 "ddgst": false 00:20:19.880 }, 00:20:19.880 "method": "bdev_nvme_attach_controller" 00:20:19.880 },{ 00:20:19.880 "params": { 00:20:19.880 "name": "Nvme2", 00:20:19.880 "trtype": "tcp", 00:20:19.880 "traddr": "10.0.0.2", 00:20:19.880 "adrfam": "ipv4", 00:20:19.880 "trsvcid": "4420", 00:20:19.880 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:19.880 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:19.880 "hdgst": false, 00:20:19.880 "ddgst": false 00:20:19.880 }, 00:20:19.880 "method": "bdev_nvme_attach_controller" 00:20:19.880 },{ 00:20:19.880 "params": { 00:20:19.880 "name": "Nvme3", 00:20:19.880 "trtype": "tcp", 00:20:19.880 "traddr": "10.0.0.2", 00:20:19.880 "adrfam": "ipv4", 00:20:19.880 "trsvcid": "4420", 00:20:19.880 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:19.880 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:19.880 "hdgst": false, 00:20:19.880 "ddgst": false 00:20:19.880 }, 00:20:19.880 "method": "bdev_nvme_attach_controller" 00:20:19.880 },{ 00:20:19.880 "params": { 00:20:19.880 "name": "Nvme4", 00:20:19.880 "trtype": "tcp", 00:20:19.880 "traddr": "10.0.0.2", 00:20:19.880 "adrfam": "ipv4", 00:20:19.880 "trsvcid": "4420", 00:20:19.880 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:19.880 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:19.880 "hdgst": false, 00:20:19.880 "ddgst": false 00:20:19.880 }, 00:20:19.880 "method": "bdev_nvme_attach_controller" 00:20:19.880 },{ 00:20:19.880 "params": { 00:20:19.880 "name": "Nvme5", 00:20:19.880 "trtype": "tcp", 00:20:19.880 "traddr": "10.0.0.2", 00:20:19.880 "adrfam": "ipv4", 00:20:19.880 "trsvcid": "4420", 00:20:19.880 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:19.880 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:19.880 "hdgst": false, 00:20:19.880 "ddgst": false 00:20:19.880 }, 00:20:19.880 "method": "bdev_nvme_attach_controller" 00:20:19.880 },{ 00:20:19.880 "params": { 00:20:19.880 "name": "Nvme6", 00:20:19.880 "trtype": "tcp", 00:20:19.880 "traddr": "10.0.0.2", 00:20:19.880 "adrfam": "ipv4", 00:20:19.880 "trsvcid": "4420", 00:20:19.880 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:19.880 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:19.880 "hdgst": false, 00:20:19.880 "ddgst": false 00:20:19.880 }, 00:20:19.880 "method": "bdev_nvme_attach_controller" 00:20:19.880 },{ 00:20:19.880 "params": { 00:20:19.880 "name": "Nvme7", 00:20:19.880 "trtype": "tcp", 00:20:19.880 "traddr": "10.0.0.2", 00:20:19.880 "adrfam": "ipv4", 00:20:19.880 "trsvcid": "4420", 00:20:19.880 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:19.880 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:19.880 "hdgst": false, 00:20:19.880 "ddgst": false 00:20:19.880 }, 00:20:19.880 "method": "bdev_nvme_attach_controller" 00:20:19.880 },{ 00:20:19.880 "params": { 00:20:19.880 "name": "Nvme8", 00:20:19.880 "trtype": "tcp", 00:20:19.880 "traddr": "10.0.0.2", 00:20:19.880 "adrfam": "ipv4", 00:20:19.880 "trsvcid": "4420", 00:20:19.880 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:19.880 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:19.880 "hdgst": false, 
00:20:19.880 "ddgst": false 00:20:19.880 }, 00:20:19.880 "method": "bdev_nvme_attach_controller" 00:20:19.880 },{ 00:20:19.880 "params": { 00:20:19.880 "name": "Nvme9", 00:20:19.880 "trtype": "tcp", 00:20:19.880 "traddr": "10.0.0.2", 00:20:19.880 "adrfam": "ipv4", 00:20:19.880 "trsvcid": "4420", 00:20:19.880 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:19.880 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:19.880 "hdgst": false, 00:20:19.880 "ddgst": false 00:20:19.880 }, 00:20:19.880 "method": "bdev_nvme_attach_controller" 00:20:19.880 },{ 00:20:19.880 "params": { 00:20:19.880 "name": "Nvme10", 00:20:19.880 "trtype": "tcp", 00:20:19.880 "traddr": "10.0.0.2", 00:20:19.880 "adrfam": "ipv4", 00:20:19.880 "trsvcid": "4420", 00:20:19.880 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:19.880 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:19.880 "hdgst": false, 00:20:19.880 "ddgst": false 00:20:19.880 }, 00:20:19.880 "method": "bdev_nvme_attach_controller" 00:20:19.880 }' 00:20:19.880 [2024-07-16 00:56:54.622686] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:20:19.880 [2024-07-16 00:56:54.622774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2686211 ] 00:20:20.138 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.138 [2024-07-16 00:56:54.688365] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.138 [2024-07-16 00:56:54.802188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.035 Running I/O for 1 seconds... 00:20:22.968 00:20:22.968 Latency(us) 00:20:22.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.968 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.968 Verification LBA range: start 0x0 length 0x400 00:20:22.968 Nvme1n1 : 1.14 223.75 13.98 0.00 0.00 283210.90 22427.88 259425.47 00:20:22.968 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.968 Verification LBA range: start 0x0 length 0x400 00:20:22.968 Nvme2n1 : 1.18 217.21 13.58 0.00 0.00 287136.81 22524.97 256318.58 00:20:22.968 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.968 Verification LBA range: start 0x0 length 0x400 00:20:22.968 Nvme3n1 : 1.17 219.11 13.69 0.00 0.00 280210.20 21359.88 254765.13 00:20:22.968 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.968 Verification LBA range: start 0x0 length 0x400 00:20:22.968 Nvme4n1 : 1.18 270.38 16.90 0.00 0.00 223509.01 15437.37 251658.24 00:20:22.968 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.968 Verification LBA range: start 0x0 length 0x400 00:20:22.968 Nvme5n1 : 1.19 215.18 13.45 0.00 0.00 276173.56 22719.15 276513.37 00:20:22.968 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.968 Verification LBA range: start 0x0 length 0x400 00:20:22.968 Nvme6n1 : 1.13 228.96 14.31 0.00 0.00 252940.52 3301.07 248551.35 00:20:22.968 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.968 Verification LBA range: start 0x0 length 0x400 00:20:22.968 Nvme7n1 : 1.19 268.06 16.75 0.00 0.00 213309.90 7039.05 251658.24 00:20:22.968 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.968 Verification LBA range: start 
0x0 length 0x400 00:20:22.968 Nvme8n1 : 1.14 225.06 14.07 0.00 0.00 249462.71 22622.06 237677.23 00:20:22.968 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.968 Verification LBA range: start 0x0 length 0x400 00:20:22.968 Nvme9n1 : 1.20 213.32 13.33 0.00 0.00 260838.40 22622.06 288940.94 00:20:22.968 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.968 Verification LBA range: start 0x0 length 0x400 00:20:22.968 Nvme10n1 : 1.21 264.70 16.54 0.00 0.00 206360.50 10728.49 245444.46 00:20:22.968 =================================================================================================================== 00:20:22.968 Total : 2345.73 146.61 0.00 0.00 250602.30 3301.07 288940.94 00:20:23.227 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:20:23.227 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:23.227 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:23.227 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:23.227 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:23.227 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:23.227 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:20:23.227 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:23.227 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:20:23.227 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:23.227 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:23.227 rmmod nvme_tcp 00:20:23.227 rmmod nvme_fabrics 00:20:23.227 rmmod nvme_keyring 00:20:23.227 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:23.485 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:20:23.485 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:20:23.485 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2685682 ']' 00:20:23.485 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2685682 00:20:23.485 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2685682 ']' 00:20:23.485 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2685682 00:20:23.485 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:20:23.485 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:23.485 00:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2685682 00:20:23.485 00:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:23.485 00:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
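In the tc1 results above, the MiB/s column is simply IOPS scaled by the 64 KiB I/O size bdevperf was run with (IO size: 65536 in each job header), so the Total row can be sanity-checked with one line of arithmetic (awk used here purely for illustration):

# 2345.73 IOPS * 65536 B per I/O / 1048576 B per MiB = 146.61 MiB/s (the Total row above)
awk 'BEGIN { printf "%.2f\n", 2345.73 * 65536 / 1048576 }'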
00:20:23.485 00:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2685682' 00:20:23.485 killing process with pid 2685682 00:20:23.485 00:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2685682 00:20:23.485 00:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2685682 00:20:24.052 00:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:24.052 00:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:24.052 00:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:24.052 00:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:24.052 00:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:24.052 00:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.052 00:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.052 00:56:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:25.963 00:20:25.963 real 0m12.107s 00:20:25.963 user 0m35.755s 00:20:25.963 sys 0m3.281s 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:25.963 ************************************ 00:20:25.963 END TEST nvmf_shutdown_tc1 00:20:25.963 ************************************ 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:25.963 ************************************ 00:20:25.963 START TEST nvmf_shutdown_tc2 00:20:25.963 ************************************ 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.963 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:25.964 00:57:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:25.964 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:25.964 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:25.964 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:25.964 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:25.964 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.223 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.223 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.223 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:26.223 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.223 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.223 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.223 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:26.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:20:26.223 00:20:26.223 --- 10.0.0.2 ping statistics --- 00:20:26.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.224 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:20:26.224 00:20:26.224 --- 10.0.0.1 ping statistics --- 00:20:26.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.224 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=2687150 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2687150 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2687150 ']' 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:26.224 00:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:26.224 [2024-07-16 00:57:00.878217] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:20:26.224 [2024-07-16 00:57:00.878299] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.224 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.224 [2024-07-16 00:57:00.947046] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.483 [2024-07-16 00:57:01.069021] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.483 [2024-07-16 00:57:01.069075] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.483 [2024-07-16 00:57:01.069090] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.483 [2024-07-16 00:57:01.069102] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.483 [2024-07-16 00:57:01.069113] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
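The tc2 target start traced above reduces to launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waiting for its RPC socket. Roughly (command line taken from the trace, the repeated ip netns exec prefix collapsed to one, and waitforlisten's internals not shown in this log):

# NVMF_APP was prefixed with NVMF_TARGET_NS_CMD at nvmf/common.sh@270, so the
# target runs in the namespace where cvl_0_0 / 10.0.0.2 lives.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!                       # 2687150 in this run
waitforlisten "$nvmfpid"         # waits (max_retries=100) for /var/tmp/spdk.sock to answer
# -m 0x1E is binary 11110, i.e. cores 1-4: hence "Total cores available: 4"
# above and the four reactors starting on cores 1-4 just below.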
00:20:26.483 [2024-07-16 00:57:01.069207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.483 [2024-07-16 00:57:01.069304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.483 [2024-07-16 00:57:01.069353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:26.483 [2024-07-16 00:57:01.069355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:27.115 [2024-07-16 00:57:01.845807] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.115 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:27.116 00:57:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.116 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:27.116 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.116 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:27.116 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.116 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:27.374 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.374 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:27.374 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.374 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:27.374 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:27.374 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.374 00:57:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:27.374 Malloc1 00:20:27.374 [2024-07-16 00:57:01.922569] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.374 Malloc2 00:20:27.374 Malloc3 00:20:27.374 Malloc4 00:20:27.374 Malloc5 00:20:27.633 Malloc6 00:20:27.633 Malloc7 00:20:27.633 Malloc8 00:20:27.633 Malloc9 00:20:27.633 Malloc10 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2687403 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2687403 /var/tmp/bdevperf.sock 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2687403 ']' 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
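With the ten Malloc-backed subsystems listening on 10.0.0.2:4420, shutdown.sh@102-104 then runs bdevperf against all of them. The launch traced above amounts to roughly the following (paths shortened; feeding --json via process substitution is inferred from the /dev/fd/63 argument rather than shown explicitly):

./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10 &
perfpid=$!                                    # 2687403 here
waitforlisten "$perfpid" /var/tmp/bdevperf.sock
# -q 64: queue depth, -o 65536: 64 KiB I/Os, -w verify: read-back verification,
# -t 10: run for 10 seconds -- matching the per-job headers in the results further down.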
00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.633 { 00:20:27.633 "params": { 00:20:27.633 "name": "Nvme$subsystem", 00:20:27.633 "trtype": "$TEST_TRANSPORT", 00:20:27.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.633 "adrfam": "ipv4", 00:20:27.633 "trsvcid": "$NVMF_PORT", 00:20:27.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.633 "hdgst": ${hdgst:-false}, 00:20:27.633 "ddgst": ${ddgst:-false} 00:20:27.633 }, 00:20:27.633 "method": "bdev_nvme_attach_controller" 00:20:27.633 } 00:20:27.633 EOF 00:20:27.633 )") 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.633 { 00:20:27.633 "params": { 00:20:27.633 "name": "Nvme$subsystem", 00:20:27.633 "trtype": "$TEST_TRANSPORT", 00:20:27.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.633 "adrfam": "ipv4", 00:20:27.633 "trsvcid": "$NVMF_PORT", 00:20:27.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.633 "hdgst": ${hdgst:-false}, 00:20:27.633 "ddgst": ${ddgst:-false} 00:20:27.633 }, 00:20:27.633 "method": "bdev_nvme_attach_controller" 00:20:27.633 } 00:20:27.633 EOF 00:20:27.633 )") 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.633 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.633 { 00:20:27.633 "params": { 00:20:27.633 "name": "Nvme$subsystem", 00:20:27.633 "trtype": "$TEST_TRANSPORT", 00:20:27.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.633 "adrfam": "ipv4", 00:20:27.633 "trsvcid": "$NVMF_PORT", 00:20:27.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.633 "hdgst": ${hdgst:-false}, 00:20:27.633 "ddgst": ${ddgst:-false} 00:20:27.633 }, 00:20:27.633 "method": "bdev_nvme_attach_controller" 00:20:27.633 } 00:20:27.633 EOF 00:20:27.633 )") 00:20:27.891 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:27.891 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.891 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.891 { 00:20:27.891 "params": { 00:20:27.891 "name": "Nvme$subsystem", 00:20:27.891 "trtype": "$TEST_TRANSPORT", 00:20:27.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.891 "adrfam": "ipv4", 00:20:27.891 "trsvcid": "$NVMF_PORT", 
00:20:27.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.891 "hdgst": ${hdgst:-false}, 00:20:27.892 "ddgst": ${ddgst:-false} 00:20:27.892 }, 00:20:27.892 "method": "bdev_nvme_attach_controller" 00:20:27.892 } 00:20:27.892 EOF 00:20:27.892 )") 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.892 { 00:20:27.892 "params": { 00:20:27.892 "name": "Nvme$subsystem", 00:20:27.892 "trtype": "$TEST_TRANSPORT", 00:20:27.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.892 "adrfam": "ipv4", 00:20:27.892 "trsvcid": "$NVMF_PORT", 00:20:27.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.892 "hdgst": ${hdgst:-false}, 00:20:27.892 "ddgst": ${ddgst:-false} 00:20:27.892 }, 00:20:27.892 "method": "bdev_nvme_attach_controller" 00:20:27.892 } 00:20:27.892 EOF 00:20:27.892 )") 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.892 { 00:20:27.892 "params": { 00:20:27.892 "name": "Nvme$subsystem", 00:20:27.892 "trtype": "$TEST_TRANSPORT", 00:20:27.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.892 "adrfam": "ipv4", 00:20:27.892 "trsvcid": "$NVMF_PORT", 00:20:27.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.892 "hdgst": ${hdgst:-false}, 00:20:27.892 "ddgst": ${ddgst:-false} 00:20:27.892 }, 00:20:27.892 "method": "bdev_nvme_attach_controller" 00:20:27.892 } 00:20:27.892 EOF 00:20:27.892 )") 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.892 { 00:20:27.892 "params": { 00:20:27.892 "name": "Nvme$subsystem", 00:20:27.892 "trtype": "$TEST_TRANSPORT", 00:20:27.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.892 "adrfam": "ipv4", 00:20:27.892 "trsvcid": "$NVMF_PORT", 00:20:27.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.892 "hdgst": ${hdgst:-false}, 00:20:27.892 "ddgst": ${ddgst:-false} 00:20:27.892 }, 00:20:27.892 "method": "bdev_nvme_attach_controller" 00:20:27.892 } 00:20:27.892 EOF 00:20:27.892 )") 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.892 { 00:20:27.892 "params": { 00:20:27.892 "name": "Nvme$subsystem", 00:20:27.892 "trtype": "$TEST_TRANSPORT", 00:20:27.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.892 "adrfam": "ipv4", 00:20:27.892 "trsvcid": "$NVMF_PORT", 00:20:27.892 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.892 "hdgst": ${hdgst:-false}, 00:20:27.892 "ddgst": ${ddgst:-false} 00:20:27.892 }, 00:20:27.892 "method": "bdev_nvme_attach_controller" 00:20:27.892 } 00:20:27.892 EOF 00:20:27.892 )") 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.892 { 00:20:27.892 "params": { 00:20:27.892 "name": "Nvme$subsystem", 00:20:27.892 "trtype": "$TEST_TRANSPORT", 00:20:27.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.892 "adrfam": "ipv4", 00:20:27.892 "trsvcid": "$NVMF_PORT", 00:20:27.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.892 "hdgst": ${hdgst:-false}, 00:20:27.892 "ddgst": ${ddgst:-false} 00:20:27.892 }, 00:20:27.892 "method": "bdev_nvme_attach_controller" 00:20:27.892 } 00:20:27.892 EOF 00:20:27.892 )") 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.892 { 00:20:27.892 "params": { 00:20:27.892 "name": "Nvme$subsystem", 00:20:27.892 "trtype": "$TEST_TRANSPORT", 00:20:27.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.892 "adrfam": "ipv4", 00:20:27.892 "trsvcid": "$NVMF_PORT", 00:20:27.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.892 "hdgst": ${hdgst:-false}, 00:20:27.892 "ddgst": ${ddgst:-false} 00:20:27.892 }, 00:20:27.892 "method": "bdev_nvme_attach_controller" 00:20:27.892 } 00:20:27.892 EOF 00:20:27.892 )") 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:27.892 00:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:27.892 "params": { 00:20:27.892 "name": "Nvme1", 00:20:27.892 "trtype": "tcp", 00:20:27.892 "traddr": "10.0.0.2", 00:20:27.892 "adrfam": "ipv4", 00:20:27.892 "trsvcid": "4420", 00:20:27.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.892 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.892 "hdgst": false, 00:20:27.892 "ddgst": false 00:20:27.892 }, 00:20:27.892 "method": "bdev_nvme_attach_controller" 00:20:27.892 },{ 00:20:27.892 "params": { 00:20:27.892 "name": "Nvme2", 00:20:27.892 "trtype": "tcp", 00:20:27.892 "traddr": "10.0.0.2", 00:20:27.892 "adrfam": "ipv4", 00:20:27.892 "trsvcid": "4420", 00:20:27.892 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:27.892 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:27.892 "hdgst": false, 00:20:27.892 "ddgst": false 00:20:27.892 }, 00:20:27.892 "method": "bdev_nvme_attach_controller" 00:20:27.892 },{ 00:20:27.892 "params": { 00:20:27.892 "name": "Nvme3", 00:20:27.892 "trtype": "tcp", 00:20:27.892 "traddr": "10.0.0.2", 00:20:27.892 "adrfam": "ipv4", 00:20:27.892 "trsvcid": "4420", 00:20:27.892 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:27.892 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:27.892 "hdgst": false, 00:20:27.892 "ddgst": false 00:20:27.892 }, 00:20:27.892 "method": "bdev_nvme_attach_controller" 00:20:27.892 },{ 00:20:27.892 "params": { 00:20:27.892 "name": "Nvme4", 00:20:27.892 "trtype": "tcp", 00:20:27.892 "traddr": "10.0.0.2", 00:20:27.892 "adrfam": "ipv4", 00:20:27.892 "trsvcid": "4420", 00:20:27.892 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:27.892 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:27.892 "hdgst": false, 00:20:27.892 "ddgst": false 00:20:27.892 }, 00:20:27.892 "method": "bdev_nvme_attach_controller" 00:20:27.892 },{ 00:20:27.892 "params": { 00:20:27.892 "name": "Nvme5", 00:20:27.892 "trtype": "tcp", 00:20:27.892 "traddr": "10.0.0.2", 00:20:27.892 "adrfam": "ipv4", 00:20:27.892 "trsvcid": "4420", 00:20:27.892 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:27.892 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:27.892 "hdgst": false, 00:20:27.892 "ddgst": false 00:20:27.892 }, 00:20:27.892 "method": "bdev_nvme_attach_controller" 00:20:27.892 },{ 00:20:27.892 "params": { 00:20:27.892 "name": "Nvme6", 00:20:27.892 "trtype": "tcp", 00:20:27.892 "traddr": "10.0.0.2", 00:20:27.892 "adrfam": "ipv4", 00:20:27.892 "trsvcid": "4420", 00:20:27.892 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:27.892 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:27.892 "hdgst": false, 00:20:27.892 "ddgst": false 00:20:27.892 }, 00:20:27.892 "method": "bdev_nvme_attach_controller" 00:20:27.892 },{ 00:20:27.892 "params": { 00:20:27.892 "name": "Nvme7", 00:20:27.892 "trtype": "tcp", 00:20:27.892 "traddr": "10.0.0.2", 00:20:27.892 "adrfam": "ipv4", 00:20:27.892 "trsvcid": "4420", 00:20:27.892 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:27.892 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:27.892 "hdgst": false, 00:20:27.892 "ddgst": false 00:20:27.892 }, 00:20:27.892 "method": "bdev_nvme_attach_controller" 00:20:27.892 },{ 00:20:27.892 "params": { 00:20:27.892 "name": "Nvme8", 00:20:27.892 "trtype": "tcp", 00:20:27.892 "traddr": "10.0.0.2", 00:20:27.892 "adrfam": "ipv4", 00:20:27.892 "trsvcid": "4420", 00:20:27.892 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:27.892 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:27.892 "hdgst": false, 
00:20:27.892 "ddgst": false 00:20:27.892 }, 00:20:27.892 "method": "bdev_nvme_attach_controller" 00:20:27.892 },{ 00:20:27.892 "params": { 00:20:27.892 "name": "Nvme9", 00:20:27.892 "trtype": "tcp", 00:20:27.892 "traddr": "10.0.0.2", 00:20:27.892 "adrfam": "ipv4", 00:20:27.892 "trsvcid": "4420", 00:20:27.892 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:27.892 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:27.893 "hdgst": false, 00:20:27.893 "ddgst": false 00:20:27.893 }, 00:20:27.893 "method": "bdev_nvme_attach_controller" 00:20:27.893 },{ 00:20:27.893 "params": { 00:20:27.893 "name": "Nvme10", 00:20:27.893 "trtype": "tcp", 00:20:27.893 "traddr": "10.0.0.2", 00:20:27.893 "adrfam": "ipv4", 00:20:27.893 "trsvcid": "4420", 00:20:27.893 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:27.893 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:27.893 "hdgst": false, 00:20:27.893 "ddgst": false 00:20:27.893 }, 00:20:27.893 "method": "bdev_nvme_attach_controller" 00:20:27.893 }' 00:20:27.893 [2024-07-16 00:57:02.424698] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:20:27.893 [2024-07-16 00:57:02.424774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2687403 ] 00:20:27.893 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.893 [2024-07-16 00:57:02.488674] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.893 [2024-07-16 00:57:02.598881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.799 Running I/O for 10 seconds... 00:20:29.799 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.799 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:29.799 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:29.799 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.799 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:30.059 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.059 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:30.059 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:30.059 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:30.059 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:30.059 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:30.059 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:30.059 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:30.059 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:30.059 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:30.059 00:57:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.059 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:30.059 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.059 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:30.059 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:30.059 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:30.320 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:30.320 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:30.320 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:30.320 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.320 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:30.320 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:30.320 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.320 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:30.320 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:30.320 00:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:30.581 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:30.581 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:30.581 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:30.581 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.581 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:30.581 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:30.581 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.581 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:30.581 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:30.581 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.842 
00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2687403 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2687403 ']' 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2687403 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2687403 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2687403' 00:20:30.842 killing process with pid 2687403 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2687403 00:20:30.842 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2687403 00:20:31.100 Received shutdown signal, test time was about 1.212698 seconds 00:20:31.100 00:20:31.100 Latency(us) 00:20:31.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.101 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:31.101 Verification LBA range: start 0x0 length 0x400 00:20:31.101 Nvme1n1 : 1.21 211.40 13.21 0.00 0.00 299678.72 23204.60 312242.63 00:20:31.101 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:31.101 Verification LBA range: start 0x0 length 0x400 00:20:31.101 Nvme2n1 : 1.17 222.08 13.88 0.00 0.00 280152.58 5971.06 264085.81 00:20:31.101 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:31.101 Verification LBA range: start 0x0 length 0x400 00:20:31.101 Nvme3n1 : 1.21 264.05 16.50 0.00 0.00 231621.29 9514.86 259425.47 00:20:31.101 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:31.101 Verification LBA range: start 0x0 length 0x400 00:20:31.101 Nvme4n1 : 1.18 274.89 17.18 0.00 0.00 219587.40 7378.87 260978.92 00:20:31.101 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:31.101 Verification LBA range: start 0x0 length 0x400 00:20:31.101 Nvme5n1 : 1.20 213.76 13.36 0.00 0.00 278242.23 22427.88 285834.05 00:20:31.101 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:31.101 Verification 
LBA range: start 0x0 length 0x400 00:20:31.101 Nvme6n1 : 1.18 216.21 13.51 0.00 0.00 270634.29 21262.79 259425.47 00:20:31.101 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:31.101 Verification LBA range: start 0x0 length 0x400 00:20:31.101 Nvme7n1 : 1.19 215.68 13.48 0.00 0.00 266975.76 22524.97 267192.70 00:20:31.101 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:31.101 Verification LBA range: start 0x0 length 0x400 00:20:31.101 Nvme8n1 : 1.16 221.29 13.83 0.00 0.00 255130.55 23787.14 267192.70 00:20:31.101 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:31.101 Verification LBA range: start 0x0 length 0x400 00:20:31.101 Nvme9n1 : 1.21 212.38 13.27 0.00 0.00 262713.27 23981.32 290494.39 00:20:31.101 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:31.101 Verification LBA range: start 0x0 length 0x400 00:20:31.101 Nvme10n1 : 1.20 213.95 13.37 0.00 0.00 255759.55 25243.50 295154.73 00:20:31.101 =================================================================================================================== 00:20:31.101 Total : 2265.70 141.61 0.00 0.00 260283.09 5971.06 312242.63 00:20:31.360 00:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2687150 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:32.296 rmmod nvme_tcp 00:20:32.296 rmmod nvme_fabrics 00:20:32.296 rmmod nvme_keyring 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2687150 ']' 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2687150 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2687150 ']' 00:20:32.296 00:57:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2687150 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:32.296 00:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2687150 00:20:32.296 00:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:32.296 00:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:32.296 00:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2687150' 00:20:32.296 killing process with pid 2687150 00:20:32.296 00:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2687150 00:20:32.296 00:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2687150 00:20:32.862 00:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:32.863 00:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:32.863 00:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:32.863 00:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:32.863 00:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:32.863 00:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.863 00:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.863 00:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:35.400 00:20:35.400 real 0m8.914s 00:20:35.400 user 0m28.353s 00:20:35.400 sys 0m1.764s 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:35.400 ************************************ 00:20:35.400 END TEST nvmf_shutdown_tc2 00:20:35.400 ************************************ 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:35.400 ************************************ 00:20:35.400 START TEST nvmf_shutdown_tc3 00:20:35.400 ************************************ 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:35.400 00:57:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:35.400 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:35.401 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:35.401 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:35.401 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:35.401 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:35.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:35.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:20:35.401 00:20:35.401 --- 10.0.0.2 ping statistics --- 00:20:35.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.401 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:35.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:35.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:20:35.401 00:20:35.401 --- 10.0.0.1 ping statistics --- 00:20:35.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.401 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:35.401 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.402 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:35.402 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:35.402 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:35.402 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:35.402 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:35.402 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:35.402 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2688865 00:20:35.402 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:35.402 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2688865 00:20:35.402 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2688865 ']' 00:20:35.402 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.402 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:35.402 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.402 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:35.402 00:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:35.402 [2024-07-16 00:57:09.868570] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:20:35.402 [2024-07-16 00:57:09.868651] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.402 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.402 [2024-07-16 00:57:09.939943] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:35.402 [2024-07-16 00:57:10.057625] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.402 [2024-07-16 00:57:10.057722] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.402 [2024-07-16 00:57:10.057735] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.402 [2024-07-16 00:57:10.057755] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.402 [2024-07-16 00:57:10.057765] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:35.402 [2024-07-16 00:57:10.057855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.402 [2024-07-16 00:57:10.057918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:35.402 [2024-07-16 00:57:10.057986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:35.402 [2024-07-16 00:57:10.057988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:36.336 [2024-07-16 00:57:10.846009] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:36.336 00:57:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.336 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:36.337 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.337 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:36.337 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.337 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:36.337 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.337 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:36.337 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.337 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:36.337 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:36.337 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.337 00:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:36.337 Malloc1 00:20:36.337 [2024-07-16 00:57:10.921639] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.337 Malloc2 00:20:36.337 Malloc3 00:20:36.337 Malloc4 00:20:36.337 Malloc5 00:20:36.596 Malloc6 00:20:36.596 Malloc7 00:20:36.596 Malloc8 00:20:36.596 Malloc9 00:20:36.596 Malloc10 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2689128 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2689128 
/var/tmp/bdevperf.sock 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2689128 ']' 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.855 { 00:20:36.855 "params": { 00:20:36.855 "name": "Nvme$subsystem", 00:20:36.855 "trtype": "$TEST_TRANSPORT", 00:20:36.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.855 "adrfam": "ipv4", 00:20:36.855 "trsvcid": "$NVMF_PORT", 00:20:36.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.855 "hdgst": ${hdgst:-false}, 00:20:36.855 "ddgst": ${ddgst:-false} 00:20:36.855 }, 00:20:36.855 "method": "bdev_nvme_attach_controller" 00:20:36.855 } 00:20:36.855 EOF 00:20:36.855 )") 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.855 { 00:20:36.855 "params": { 00:20:36.855 "name": "Nvme$subsystem", 00:20:36.855 "trtype": "$TEST_TRANSPORT", 00:20:36.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.855 "adrfam": "ipv4", 00:20:36.855 "trsvcid": "$NVMF_PORT", 00:20:36.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.855 "hdgst": ${hdgst:-false}, 00:20:36.855 "ddgst": ${ddgst:-false} 00:20:36.855 }, 00:20:36.855 "method": "bdev_nvme_attach_controller" 00:20:36.855 } 00:20:36.855 EOF 00:20:36.855 )") 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.855 { 00:20:36.855 "params": { 
00:20:36.855 "name": "Nvme$subsystem", 00:20:36.855 "trtype": "$TEST_TRANSPORT", 00:20:36.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.855 "adrfam": "ipv4", 00:20:36.855 "trsvcid": "$NVMF_PORT", 00:20:36.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.855 "hdgst": ${hdgst:-false}, 00:20:36.855 "ddgst": ${ddgst:-false} 00:20:36.855 }, 00:20:36.855 "method": "bdev_nvme_attach_controller" 00:20:36.855 } 00:20:36.855 EOF 00:20:36.855 )") 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.855 { 00:20:36.855 "params": { 00:20:36.855 "name": "Nvme$subsystem", 00:20:36.855 "trtype": "$TEST_TRANSPORT", 00:20:36.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.855 "adrfam": "ipv4", 00:20:36.855 "trsvcid": "$NVMF_PORT", 00:20:36.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.855 "hdgst": ${hdgst:-false}, 00:20:36.855 "ddgst": ${ddgst:-false} 00:20:36.855 }, 00:20:36.855 "method": "bdev_nvme_attach_controller" 00:20:36.855 } 00:20:36.855 EOF 00:20:36.855 )") 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.855 { 00:20:36.855 "params": { 00:20:36.855 "name": "Nvme$subsystem", 00:20:36.855 "trtype": "$TEST_TRANSPORT", 00:20:36.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.855 "adrfam": "ipv4", 00:20:36.855 "trsvcid": "$NVMF_PORT", 00:20:36.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.855 "hdgst": ${hdgst:-false}, 00:20:36.855 "ddgst": ${ddgst:-false} 00:20:36.855 }, 00:20:36.855 "method": "bdev_nvme_attach_controller" 00:20:36.855 } 00:20:36.855 EOF 00:20:36.855 )") 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.855 { 00:20:36.855 "params": { 00:20:36.855 "name": "Nvme$subsystem", 00:20:36.855 "trtype": "$TEST_TRANSPORT", 00:20:36.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.855 "adrfam": "ipv4", 00:20:36.855 "trsvcid": "$NVMF_PORT", 00:20:36.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.855 "hdgst": ${hdgst:-false}, 00:20:36.855 "ddgst": ${ddgst:-false} 00:20:36.855 }, 00:20:36.855 "method": "bdev_nvme_attach_controller" 00:20:36.855 } 00:20:36.855 EOF 00:20:36.855 )") 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.855 { 00:20:36.855 "params": { 00:20:36.855 "name": 
"Nvme$subsystem", 00:20:36.855 "trtype": "$TEST_TRANSPORT", 00:20:36.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.855 "adrfam": "ipv4", 00:20:36.855 "trsvcid": "$NVMF_PORT", 00:20:36.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.855 "hdgst": ${hdgst:-false}, 00:20:36.855 "ddgst": ${ddgst:-false} 00:20:36.855 }, 00:20:36.855 "method": "bdev_nvme_attach_controller" 00:20:36.855 } 00:20:36.855 EOF 00:20:36.855 )") 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.855 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.855 { 00:20:36.855 "params": { 00:20:36.855 "name": "Nvme$subsystem", 00:20:36.855 "trtype": "$TEST_TRANSPORT", 00:20:36.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.855 "adrfam": "ipv4", 00:20:36.855 "trsvcid": "$NVMF_PORT", 00:20:36.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.855 "hdgst": ${hdgst:-false}, 00:20:36.855 "ddgst": ${ddgst:-false} 00:20:36.855 }, 00:20:36.855 "method": "bdev_nvme_attach_controller" 00:20:36.855 } 00:20:36.855 EOF 00:20:36.855 )") 00:20:36.856 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:36.856 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.856 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.856 { 00:20:36.856 "params": { 00:20:36.856 "name": "Nvme$subsystem", 00:20:36.856 "trtype": "$TEST_TRANSPORT", 00:20:36.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.856 "adrfam": "ipv4", 00:20:36.856 "trsvcid": "$NVMF_PORT", 00:20:36.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.856 "hdgst": ${hdgst:-false}, 00:20:36.856 "ddgst": ${ddgst:-false} 00:20:36.856 }, 00:20:36.856 "method": "bdev_nvme_attach_controller" 00:20:36.856 } 00:20:36.856 EOF 00:20:36.856 )") 00:20:36.856 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:36.856 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.856 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.856 { 00:20:36.856 "params": { 00:20:36.856 "name": "Nvme$subsystem", 00:20:36.856 "trtype": "$TEST_TRANSPORT", 00:20:36.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.856 "adrfam": "ipv4", 00:20:36.856 "trsvcid": "$NVMF_PORT", 00:20:36.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.856 "hdgst": ${hdgst:-false}, 00:20:36.856 "ddgst": ${ddgst:-false} 00:20:36.856 }, 00:20:36.856 "method": "bdev_nvme_attach_controller" 00:20:36.856 } 00:20:36.856 EOF 00:20:36.856 )") 00:20:36.856 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:36.856 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:20:36.856 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:36.856 00:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:36.856 "params": { 00:20:36.856 "name": "Nvme1", 00:20:36.856 "trtype": "tcp", 00:20:36.856 "traddr": "10.0.0.2", 00:20:36.856 "adrfam": "ipv4", 00:20:36.856 "trsvcid": "4420", 00:20:36.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.856 "hdgst": false, 00:20:36.856 "ddgst": false 00:20:36.856 }, 00:20:36.856 "method": "bdev_nvme_attach_controller" 00:20:36.856 },{ 00:20:36.856 "params": { 00:20:36.856 "name": "Nvme2", 00:20:36.856 "trtype": "tcp", 00:20:36.856 "traddr": "10.0.0.2", 00:20:36.856 "adrfam": "ipv4", 00:20:36.856 "trsvcid": "4420", 00:20:36.856 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:36.856 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:36.856 "hdgst": false, 00:20:36.856 "ddgst": false 00:20:36.856 }, 00:20:36.856 "method": "bdev_nvme_attach_controller" 00:20:36.856 },{ 00:20:36.856 "params": { 00:20:36.856 "name": "Nvme3", 00:20:36.856 "trtype": "tcp", 00:20:36.856 "traddr": "10.0.0.2", 00:20:36.856 "adrfam": "ipv4", 00:20:36.856 "trsvcid": "4420", 00:20:36.856 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:36.856 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:36.856 "hdgst": false, 00:20:36.856 "ddgst": false 00:20:36.856 }, 00:20:36.856 "method": "bdev_nvme_attach_controller" 00:20:36.856 },{ 00:20:36.856 "params": { 00:20:36.856 "name": "Nvme4", 00:20:36.856 "trtype": "tcp", 00:20:36.856 "traddr": "10.0.0.2", 00:20:36.856 "adrfam": "ipv4", 00:20:36.856 "trsvcid": "4420", 00:20:36.856 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:36.856 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:36.856 "hdgst": false, 00:20:36.856 "ddgst": false 00:20:36.856 }, 00:20:36.856 "method": "bdev_nvme_attach_controller" 00:20:36.856 },{ 00:20:36.856 "params": { 00:20:36.856 "name": "Nvme5", 00:20:36.856 "trtype": "tcp", 00:20:36.856 "traddr": "10.0.0.2", 00:20:36.856 "adrfam": "ipv4", 00:20:36.856 "trsvcid": "4420", 00:20:36.856 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:36.856 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:36.856 "hdgst": false, 00:20:36.856 "ddgst": false 00:20:36.856 }, 00:20:36.856 "method": "bdev_nvme_attach_controller" 00:20:36.856 },{ 00:20:36.856 "params": { 00:20:36.856 "name": "Nvme6", 00:20:36.856 "trtype": "tcp", 00:20:36.856 "traddr": "10.0.0.2", 00:20:36.856 "adrfam": "ipv4", 00:20:36.856 "trsvcid": "4420", 00:20:36.856 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:36.856 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:36.856 "hdgst": false, 00:20:36.856 "ddgst": false 00:20:36.856 }, 00:20:36.856 "method": "bdev_nvme_attach_controller" 00:20:36.856 },{ 00:20:36.856 "params": { 00:20:36.856 "name": "Nvme7", 00:20:36.856 "trtype": "tcp", 00:20:36.856 "traddr": "10.0.0.2", 00:20:36.856 "adrfam": "ipv4", 00:20:36.856 "trsvcid": "4420", 00:20:36.856 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:36.856 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:36.856 "hdgst": false, 00:20:36.856 "ddgst": false 00:20:36.856 }, 00:20:36.856 "method": "bdev_nvme_attach_controller" 00:20:36.856 },{ 00:20:36.856 "params": { 00:20:36.856 "name": "Nvme8", 00:20:36.856 "trtype": "tcp", 00:20:36.856 "traddr": "10.0.0.2", 00:20:36.856 "adrfam": "ipv4", 00:20:36.856 "trsvcid": "4420", 00:20:36.856 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:36.856 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:36.856 "hdgst": false, 
00:20:36.856 "ddgst": false 00:20:36.856 }, 00:20:36.856 "method": "bdev_nvme_attach_controller" 00:20:36.856 },{ 00:20:36.856 "params": { 00:20:36.856 "name": "Nvme9", 00:20:36.856 "trtype": "tcp", 00:20:36.856 "traddr": "10.0.0.2", 00:20:36.856 "adrfam": "ipv4", 00:20:36.856 "trsvcid": "4420", 00:20:36.856 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:36.856 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:36.856 "hdgst": false, 00:20:36.856 "ddgst": false 00:20:36.856 }, 00:20:36.856 "method": "bdev_nvme_attach_controller" 00:20:36.856 },{ 00:20:36.856 "params": { 00:20:36.856 "name": "Nvme10", 00:20:36.856 "trtype": "tcp", 00:20:36.856 "traddr": "10.0.0.2", 00:20:36.856 "adrfam": "ipv4", 00:20:36.856 "trsvcid": "4420", 00:20:36.856 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:36.856 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:36.856 "hdgst": false, 00:20:36.856 "ddgst": false 00:20:36.856 }, 00:20:36.856 "method": "bdev_nvme_attach_controller" 00:20:36.856 }' 00:20:36.856 [2024-07-16 00:57:11.432773] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:20:36.856 [2024-07-16 00:57:11.432864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689128 ] 00:20:36.856 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.856 [2024-07-16 00:57:11.495177] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.856 [2024-07-16 00:57:11.605047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.762 Running I/O for 10 seconds... 00:20:38.762 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:38.762 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:38.762 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:38.762 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.762 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:38.762 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.762 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:38.762 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:38.762 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:38.763 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:38.763 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:38.763 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:38.763 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:38.763 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:38.763 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:20:38.763 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:38.763 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.763 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:38.763 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.763 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:38.763 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:38.763 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:39.021 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:39.021 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:39.021 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:39.021 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:39.021 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.021 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:39.297 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.297 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:39.298 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:39.298 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:39.298 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:39.298 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:39.298 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2688865 00:20:39.298 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2688865 ']' 00:20:39.298 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2688865 00:20:39.298 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:20:39.298 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:39.298 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2688865 00:20:39.298 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:39.298 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:39.298 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2688865' 00:20:39.298 killing process with pid 2688865 00:20:39.298 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2688865 00:20:39.298 00:57:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2688865 00:20:39.298 
[2024-07-16 00:57:13.843329] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843464] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843480] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843492] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843504] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843516] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843528] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843549] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843562] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843574] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843599] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843612] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843624] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843635] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843656] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843668] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843680] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843692] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843703] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843715] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843726] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843737] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843749] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843760] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843772] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843783] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843794] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843806] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843817] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843829] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843841] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843852] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843863] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843874] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843911] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843930] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843941] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843953] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843965] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843976] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.843989] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844003] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844026] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844038] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844049] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844061] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844072] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844084] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844096] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844108] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844120] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844131] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844143] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844155] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844166] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844178] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844189] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844201] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844214] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844226] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844237] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844249] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.844261] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e5c0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.845684] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470540 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.845717] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470540 is same with the 
state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.845730] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470540 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.846901] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.846935] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.298 [2024-07-16 00:57:13.846948] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.846972] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.846985] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.846997] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847009] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847021] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847034] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847046] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847058] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847070] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847082] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847095] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847107] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847119] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847131] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847143] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847165] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847176] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847189] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847201] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847221] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847233] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847245] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847257] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847269] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847282] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847294] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847306] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847323] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847336] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.299 [2024-07-16 00:57:13.847374] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847396] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.299 [2024-07-16 00:57:13.847409] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847424] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.299 [2024-07-16 00:57:13.847436] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.299 [2024-07-16 00:57:13.847449] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to
be set 00:20:39.299 [2024-07-16 00:57:13.847455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.299 [2024-07-16 00:57:13.847461] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.299 [2024-07-16 00:57:13.847474] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.299 [2024-07-16 00:57:13.847487] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.299 [2024-07-16 00:57:13.847499] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb9300 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847511] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847526] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847538] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847550] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847562] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847583] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847597] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847609] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.299 [2024-07-16 00:57:13.847622] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847635] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.299 [2024-07-16 00:57:13.847648]
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.299 [2024-07-16 00:57:13.847660] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.299 [2024-07-16 00:57:13.847673] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.299 [2024-07-16 00:57:13.847686] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.299 [2024-07-16 00:57:13.847699] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.299 [2024-07-16 00:57:13.847712] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.299 [2024-07-16 00:57:13.847724] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847737] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eab0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847751] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847764] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.847776] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eaa0 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.849554] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.849588] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.849610] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.849624] tcp.c:1621:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.849637] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.849649] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.849661] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.849673] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.849685] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.849698] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.299 [2024-07-16 00:57:13.849710] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849723] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849736] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849748] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849760] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849773] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849785] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849797] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849809] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849821] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849833] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849845] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849857] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849869] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849889] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 
00:57:13.849902] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849914] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849929] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849942] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849957] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849970] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849981] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.849993] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850006] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850018] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850030] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850042] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850054] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850066] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850078] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850090] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850102] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850114] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850125] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850146] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850157] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850169] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same 
with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850180] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850207] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850220] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850231] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850243] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850254] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850269] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850281] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850292] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850307] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850318] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.850330] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ef80 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.851608] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f480 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.851641] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f480 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.851656] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f480 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.851669] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f480 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.851680] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f480 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.851692] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f480 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.851704] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f480 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.851715] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f480 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.851728] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f480 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.851740] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f480 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.851753] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f480 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.851765] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f480 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.851776] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f480 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.851788] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f480 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.851800] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f480 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852472] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852499] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852513] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852526] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852554] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852567] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852580] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852592] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852605] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852621] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852634] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852646] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852658] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852670] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852682] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852694] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the 
state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852707] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852719] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852730] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852754] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852767] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852780] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852792] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852804] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852817] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852828] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.300 [2024-07-16 00:57:13.852841] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.852853] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.852865] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.852900] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.852925] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.852937] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.852957] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.852971] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.852983] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.852999] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853012] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853024] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853036] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853048] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853061] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853073] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853085] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853097] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853109] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853121] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853142] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853153] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853165] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853178] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853190] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853201] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853213] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853225] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853237] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853249] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853261] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853273] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 
00:57:13.853285] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853312] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853323] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.853335] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5f70 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854583] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854608] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854620] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854632] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854644] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854655] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854668] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854679] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854692] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854703] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854715] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854726] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854738] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854750] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854762] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854774] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854786] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854797] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same 
with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854809] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854822] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854833] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854845] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854857] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854868] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854902] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854926] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854938] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854951] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854968] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854981] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.854993] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855006] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855019] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855031] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855044] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855057] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855068] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855081] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855093] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855105] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855118] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855130] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855142] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855154] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855166] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855178] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855190] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855202] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855215] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855227] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855239] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855251] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855263] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855290] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.301 [2024-07-16 00:57:13.855304] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.855319] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.855332] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.855343] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.855354] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.855366] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.855378] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the 
state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.855390] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.855401] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6450 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.856756] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.856782] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.856797] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.856810] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.856823] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.856835] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.856848] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.856862] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.856882] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.856898] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.856911] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.856925] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.856937] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.856951] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.856964] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.856976] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.856988] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857001] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857014] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857031] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857047] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857068] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857089] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857108] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857122] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857134] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857148] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857175] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857188] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857200] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857212] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857225] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857237] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857250] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857262] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857275] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857287] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857298] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857310] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857323] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 00:57:13.857335] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set 00:20:39.302 [2024-07-16 
00:57:13.857347] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f7f0 is same with the state(5) to be set
[... the same tcp.c:1621 error for tqpair=0x146f7f0 repeated, timestamps 00:57:13.857359 through 00:57:13.857606; identical lines collapsed ...]
00:20:39.302 [2024-07-16 00:57:13.858656] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146fcf0 is same with the state(5) to be set
[... the same error for tqpair=0x146fcf0 repeated, timestamps 00:57:13.858681 through 00:57:13.859538; identical lines collapsed ...]
00:20:39.303 [2024-07-16 00:57:13.860350] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470060 is same with the state(5) to be set
[... the same error for tqpair=0x1470060 repeated, timestamps 00:57:13.860375 through 00:57:13.861185; identical lines collapsed ...]
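The repeated tcp.c:1621 message above is the target's TCP transport noting that the qpair's receive state machine was asked to move into the state it already holds; the guard fires on every call, and during a disconnect storm the same call happens over and over, which is why the identical line floods the log for each tqpair. Below is a minimal sketch of that kind of idempotent-transition guard; it is illustrative only, with a made-up state enum, and is not SPDK's actual nvmf_tcp_qpair_set_recv_state().

#include <stdio.h>

/* Hypothetical receive states; the real SPDK enum and its numbering differ. */
enum recv_state {
	RECV_STATE_AWAIT_PDU_HDR = 1,
	RECV_STATE_AWAIT_PDU_PAYLOAD = 2,
	RECV_STATE_QUIESCING = 5,
};

struct tqpair {
	enum recv_state recv_state;
};

/* Guard modeled on the log message: complain (but do nothing) when the
 * requested state equals the current state. */
static void set_recv_state(struct tqpair *tqpair, enum recv_state state)
{
	if (tqpair->recv_state == state) {
		fprintf(stderr,
			"The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
}

int main(void)
{
	struct tqpair tqpair = { .recv_state = RECV_STATE_QUIESCING };

	set_recv_state(&tqpair, RECV_STATE_QUIESCING);     /* logs the warning, as in the flood above */
	set_recv_state(&tqpair, RECV_STATE_AWAIT_PDU_HDR); /* silent, state actually changes */
	return 0;
}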
00:20:39.303 [2024-07-16 00:57:13.861973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:39.304 [2024-07-16 00:57:13.862013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeated for WRITE sqid:1 cid:59-63 (lba 23936-24448, len 128) and READ sqid:1 cid:0-57 (lba 16384-23680, len 128), every command completing ABORTED - SQ DELETION (00/08), timestamps 00:57:13.862053 through 00:57:13.864230; identical pairs collapsed ...]
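Every aborted command in the burst above completes with status (00/08). Read as an NVMe completion status, that pair is status code type 0x0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion", which is what you expect while the I/O submission queues are being torn down for a controller reset. A small stand-alone decoder for the (sct/sc) pair, covering only the values that appear in this log (not SPDK code):

#include <stdio.h>

/* Decode the "(SCT/SC)" pair that the completion lines above print, e.g. "(00/08)".
 * Only the codes seen in this log are handled. */
static const char *nvme_status_str(unsigned int sct, unsigned int sc)
{
	if (sct == 0x0) { /* generic command status */
		switch (sc) {
		case 0x00:
			return "SUCCESS";
		case 0x08:
			return "ABORTED - SQ DELETION";
		default:
			break;
		}
	}
	return "UNKNOWN";
}

int main(void)
{
	printf("(%02x/%02x) -> %s\n", 0x0, 0x08, nvme_status_str(0x0, 0x08));
	return 0;
}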
00:20:39.306 [2024-07-16 00:57:13.864841] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfbbbf0 was disconnected and freed. reset controller.
00:20:39.306 [2024-07-16 00:57:13.864975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:39.306 [2024-07-16 00:57:13.864998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST (0c) command/abort pair repeated for qid:0 cid:1-3, followed by nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb9660 is same with the state(5) to be set ...]
00:20:39.306 [2024-07-16 00:57:13.865193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb9300 (9): Bad file descriptor
[... the identical admin abort block (ASYNC EVENT REQUEST (0c) qid:0 cid:0-3, each ABORTED - SQ DELETION (00/08), ending with the nvme_tcp.c: 327 recv-state error) repeats for tqpair=0xe32d70, 0x910610, 0xfda4c0, 0xe3a6c0, 0xe4a7b0 and 0xedd680, timestamps 00:57:13.865248 through 00:57:13.866219; duplicates collapsed ...]
00:20:39.307 [2024-07-16 00:57:13.866249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0eab0 (9): Bad file descriptor
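The remaining errors are the tail end of the same teardown: each controller's admin queue has its outstanding ASYNC EVENT REQUEST commands (opcode 0c in the log) aborted, and the host-side TCP transport then fails to flush qpairs whose sockets are already gone, reporting "(9): Bad file descriptor". Errno 9 is EBADF, i.e. an I/O call on a descriptor that has already been closed. A tiny stand-alone reproduction of that errno, nothing SPDK-specific:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	int fd = dup(STDOUT_FILENO); /* grab a valid descriptor... */

	close(fd);                   /* ...then close it out from under the writer */
	if (write(fd, "x", 1) < 0) {
		/* Prints: write failed: errno=9 (Bad file descriptor) */
		printf("write failed: errno=%d (%s)\n", errno, strerror(errno));
	}
	return 0;
}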
[... the same admin abort block repeats once more: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3, each ABORTED - SQ DELETION (00/08), timestamps 00:57:13.866297 through 00:57:13.866404, ending with nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4b030 is same with the state(5) to be set at 00:57:13.866417 ...]
00:20:39.307 [2024-07-16 00:57:13.868766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:39.307 [2024-07-16 00:57:13.868799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair repeated for WRITE sqid:1 cid:1-18 (lba 16512-18688, len 128), every command completing ABORTED - SQ DELETION (00/08), timestamps 00:57:13.868824 through 00:57:13.869397; identical pairs collapsed ...]
00:20:39.307 [2024-07-16 00:57:13.869417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.307 [2024-07-16 00:57:13.869432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.307 [2024-07-16 00:57:13.869447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.307 [2024-07-16 00:57:13.869461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.307 [2024-07-16 00:57:13.869478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.307 [2024-07-16 00:57:13.869492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.307 [2024-07-16 00:57:13.869508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.307 [2024-07-16 00:57:13.869522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.307 [2024-07-16 00:57:13.869538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.307 [2024-07-16 00:57:13.869552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.307 [2024-07-16 00:57:13.869568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.307 [2024-07-16 00:57:13.869582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.307 [2024-07-16 00:57:13.869597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.307 [2024-07-16 00:57:13.869612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.307 [2024-07-16 00:57:13.869628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.869643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.869659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.869673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.869689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.869703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.869720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 
nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.869734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.869750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.869764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.869780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.869798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.869814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.869830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.869846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.869860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.869884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.869901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.869929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.869943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.869959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.869973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.869989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.870003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.891979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.308 [2024-07-16 00:57:13.892831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.308 [2024-07-16 00:57:13.892989] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe088a0 was disconnected and freed. reset controller. 
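The run of WRITE completions above is the expected teardown pattern for this test: once the TCP connection to the target drops, every command still outstanding on submission queue 1 is completed with ABORTED - SQ DELETION (status 00/08), the disconnected qpair (0xe088a0 here) is freed in bdev_nvme_disconnected_qpair_cb, and the bdev layer schedules a controller reset. When triaging a console log like this it is usually enough to collapse the flood into per-queue abort counts; a minimal sketch with standard shell tools, assuming the console output has been saved to a file named console.log (hypothetical path):

  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' console.log | sort | uniq -c
  grep -c 'was disconnected and freed' console.log

The first command reports how many aborted completions each queue pair saw; the second counts how many qpairs were torn down and freed during the run.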
00:20:39.308 [2024-07-16 00:57:13.893512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:39.308 [2024-07-16 00:57:13.893644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb9660 (9): Bad file descriptor 00:20:39.308 [2024-07-16 00:57:13.893700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe32d70 (9): Bad file descriptor 00:20:39.308 [2024-07-16 00:57:13.893744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x910610 (9): Bad file descriptor 00:20:39.308 [2024-07-16 00:57:13.893790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfda4c0 (9): Bad file descriptor 00:20:39.308 [2024-07-16 00:57:13.893835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3a6c0 (9): Bad file descriptor 00:20:39.308 [2024-07-16 00:57:13.893890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4a7b0 (9): Bad file descriptor 00:20:39.308 [2024-07-16 00:57:13.893927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedd680 (9): Bad file descriptor 00:20:39.308 [2024-07-16 00:57:13.893966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4b030 (9): Bad file descriptor 00:20:39.308 [2024-07-16 00:57:13.895992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:39.309 [2024-07-16 00:57:13.896245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.309 [2024-07-16 00:57:13.896274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb9300 with addr=10.0.0.2, port=4420 00:20:39.309 [2024-07-16 00:57:13.896292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb9300 is same with the state(5) to be set 00:20:39.309 [2024-07-16 00:57:13.896358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.896981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.896998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.897012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.897032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.897046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.897062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.897075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.897097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.897112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.897131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.897146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.897161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.897176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.897193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.897207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.897223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.897237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.897261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.897275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.897291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.897305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.897320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.897334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.309 [2024-07-16 00:57:13.897350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.309 [2024-07-16 00:57:13.897364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.897393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.897423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:39.310 [2024-07-16 00:57:13.897452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.897482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.897515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.897546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.897576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.897606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.897635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.897664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.897694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.897723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 
00:57:13.897754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.897784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.897813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.897843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.897872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.897918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.897948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.897978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.897994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.898008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.898024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.898038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.898053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.898067] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.898083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.898097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.898112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.898126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.898142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.898156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.898172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.898186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.898201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.898215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.898231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.898245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.898260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.898277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.898294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.898308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.898323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.898337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.899764] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:39.310 [2024-07-16 00:57:13.899848] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:39.310 
[2024-07-16 00:57:13.899936] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:39.310 [2024-07-16 00:57:13.900009] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:39.310 [2024-07-16 00:57:13.900099] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:39.310 [2024-07-16 00:57:13.900166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.310 [2024-07-16 00:57:13.900409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.310 [2024-07-16 00:57:13.900440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe32d70 with addr=10.0.0.2, port=4420 00:20:39.310 [2024-07-16 00:57:13.900457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32d70 is same with the state(5) to be set 00:20:39.310 [2024-07-16 00:57:13.900481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb9300 (9): Bad file descriptor 00:20:39.310 [2024-07-16 00:57:13.900584] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:39.310 [2024-07-16 00:57:13.901142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.310 [2024-07-16 00:57:13.901171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0eab0 with addr=10.0.0.2, port=4420 00:20:39.310 [2024-07-16 00:57:13.901189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eab0 is same with the state(5) to be set 00:20:39.310 [2024-07-16 00:57:13.901208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe32d70 (9): Bad file descriptor 00:20:39.310 [2024-07-16 00:57:13.901227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:39.310 [2024-07-16 00:57:13.901241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:39.310 [2024-07-16 00:57:13.901257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
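The sequence just above shows the other half of the picture, the reconnect attempts: posix_sock_create fails with errno 111 (ECONNREFUSED on Linux), meaning nothing is accepting connections on 10.0.0.2:4420 at that moment, nvme_ctrlr_process_init therefore lands in an error state, spdk_nvme_ctrlr_reconnect_poll_async reports that controller reinitialization failed, and the controller for nqn.2016-06.io.spdk:cnode10 is marked as failed. A quick way to see which subsystems ended up in that state, again assuming the output was saved to console.log (hypothetical path):

  grep -o 'nqn.2016-06.io.spdk:cnode[0-9]*] in failed state' console.log | sort | uniq -c
  grep -c 'connect() failed, errno = 111' console.log

The first command lists each cnode that was left in the failed state and how often it was reported; the second gives a rough count of refused reconnect attempts.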
00:20:39.310 [2024-07-16 00:57:13.901593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.901617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.901640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.310 [2024-07-16 00:57:13.901657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.310 [2024-07-16 00:57:13.901674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.901689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.901712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.901727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.901745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.901759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.901776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.901790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.901806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.901821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.901837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.901851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.901867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.901896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.901932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.901947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 
00:57:13.901963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.901978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.901994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.902009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.902025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.902039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.902056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.902070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.902086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.902100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.902117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.902141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.902158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.902173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.902189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.902203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.902219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.902234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.902250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.902264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.902280] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.902294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.902310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.902324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.902340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.902354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.902370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.902384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.902400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.902414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.902430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.902445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.902461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.902476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.902492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.311 [2024-07-16 00:57:13.902506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.311 [2024-07-16 00:57:13.902526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.902542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.902558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.902572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.902588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.902603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.902619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.902633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.902649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.902663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.902679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.902693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.902710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.902724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.902740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.902754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.902770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.902784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.902800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.902814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.902830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.902844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.902860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.902883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.902901] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.902930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.902947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.902961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.902977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.902992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.903009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.903023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.903039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.903053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.903070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.903084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.903100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.903114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.903131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.903145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.903161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.903175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.903191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.903204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.903220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.903234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.903251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.903265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.903280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.903295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.903314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.903329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.903345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.903359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.903376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.903390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.903405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.903420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.903436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.903450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.903466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.903481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.903497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.903511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.903527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:39.312 [2024-07-16 00:57:13.903541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:39.312 [2024-07-16 00:57:13.903558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:39.312 [2024-07-16 00:57:13.903573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:39.312 [2024-07-16 00:57:13.903589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:39.312 [2024-07-16 00:57:13.903602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:39.312 [2024-07-16 00:57:13.903618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:39.312 [2024-07-16 00:57:13.903633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:39.312 [2024-07-16 00:57:13.903647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa6dd0 is same with the state(5) to be set
00:20:39.312 [2024-07-16 00:57:13.903724] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfa6dd0 was disconnected and freed. reset controller.
00:20:39.312 [2024-07-16 00:57:13.903785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.312 [2024-07-16 00:57:13.903815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0eab0 (9): Bad file descriptor
00:20:39.312 [2024-07-16 00:57:13.903839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:20:39.312 [2024-07-16 00:57:13.903854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:20:39.312 [2024-07-16 00:57:13.903867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:20:39.312 [2024-07-16 00:57:13.903933] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:39.312 [2024-07-16 00:57:13.905186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.312 [2024-07-16 00:57:13.905241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.312 [2024-07-16 00:57:13.905258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.312 [2024-07-16 00:57:13.905272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.312 [2024-07-16 00:57:13.905363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.312 [2024-07-16 00:57:13.905386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.312 [2024-07-16 00:57:13.905408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.905424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.905440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.905454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.905470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.905485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.905501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.905515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.905532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.905546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.905562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.905576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.905592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.905606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.905622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.905636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.905657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.905672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 
00:57:13.905688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.905703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.905719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.905733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.905749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.905763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.905778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.905792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.905809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.905822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.905838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.905852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.905867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.905888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.905905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.905920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.905935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.905949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.905966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.905980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.905996] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.313 [2024-07-16 00:57:13.906680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.313 [2024-07-16 00:57:13.906695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.906709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.906724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.906739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.906755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.906769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.906784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.906801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.906818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.906832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.906847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.906861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.906882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.906899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.906914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.906929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.906945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.906959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.906975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.906989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.907005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.907019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.907035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.907049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.907065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.907079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.907095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.907109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.907125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.907139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.907154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.907168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.907187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.907202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.907218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.907232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.907247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.907261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.907277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.907291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.907307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.907324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.907338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa5980 is same with the state(5) to be set 00:20:39.314 [2024-07-16 00:57:13.908597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.908622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.908643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.908659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.908676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.908691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.908708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.908723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.908740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.908766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.908782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.908796] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.908813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.908828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.908845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.908864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.908889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.908906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.908924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.908938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.908953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.908967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.908983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.908998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.909015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.909030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.909046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.909059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.909076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.909090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.909107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.909131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.909147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.909161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.909177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.909190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.909206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.909219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.909235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.909250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.909269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.314 [2024-07-16 00:57:13.909284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.314 [2024-07-16 00:57:13.909301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.909973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.909987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.910003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.910017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.910034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.910047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.910067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.910082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:39.315 [2024-07-16 00:57:13.910098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.910112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.910138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.910152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.910169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.910183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.910199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.910213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.910228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.910241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.910257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.910272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.910288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.910301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.910317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.910331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.910347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.910362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 00:57:13.910377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.910391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.315 [2024-07-16 
00:57:13.910407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.315 [2024-07-16 00:57:13.910421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.910436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.910454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.910470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.910484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.910500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.910514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.910531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.910545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.910561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.910575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.910591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.910606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.910622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.910636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.910652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe09db0 is same with the state(5) to be set 00:20:39.316 [2024-07-16 00:57:13.911907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.911930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.911951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.911966] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.911983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.911997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.912981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.912997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.316 [2024-07-16 00:57:13.913011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.316 [2024-07-16 00:57:13.913027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.913924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.913938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1594800 is same with the state(5) to be set 00:20:39.317 [2024-07-16 00:57:13.915182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.915205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.915226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.915241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.915257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.915273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.915289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.915308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.915325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.915339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.915357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.915371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.915387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.915401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.915417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.915431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.915447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.915461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.915476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.915490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.915506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.915520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.915536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.915550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.915566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.317 [2024-07-16 00:57:13.915580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.317 [2024-07-16 00:57:13.915596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.915610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.915626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.915640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.915655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.915669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.915689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.915704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.915720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.915733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.915749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.915764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.915779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.915794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.915809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.915824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.915840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.915854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.915869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.915891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.915907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.915922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.915938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.915953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.915968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.915982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.915998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:39.318 [2024-07-16 00:57:13.916662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.318 [2024-07-16 00:57:13.916919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.318 [2024-07-16 00:57:13.916935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.916949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.916965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 
00:57:13.916980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.916996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.917010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.917027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.917042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.917058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.917073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.917089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.917103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.917119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.917133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.917149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.917164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.917178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c210 is same with the state(5) to be set 00:20:39.319 [2024-07-16 00:57:13.918407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.918431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.918453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.918469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.918492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.918507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.918524] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.918538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.918554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.918568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.918584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.918599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.918614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.918629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.918645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.918659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.918675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.918689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.918705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.918719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.918734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.918748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.918764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.918779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.918795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.918810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.918826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.918840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.918856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.918874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.918899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.918914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.918930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.918945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.918961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.918976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.918993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.919007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.919024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.919038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.919054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.919069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.919084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.919099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.919115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.919130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.919146] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.919160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.919176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.919190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.919207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.919221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.919237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.919251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.919275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.919290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.919306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.919321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.919337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.919351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.319 [2024-07-16 00:57:13.919367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.319 [2024-07-16 00:57:13.919381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.919397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.919412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.919427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.919441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.919458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.919472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.919489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.919503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.919519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.919533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.919549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.919563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.919579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.919593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.919609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.919623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.919638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.919656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.919672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.919686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.919702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.919716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.919732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.919746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.919762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.919776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.919792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.919806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.919821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.919835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.919851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.919865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.919886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.919901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.926594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.926656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.926673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.926688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.926705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.926719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.926735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.926749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.926775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.926790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.926806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.926820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.926835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.926849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.926865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.926886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.926904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.926918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.926934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.926948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.926964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.926979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.926995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.927009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.927025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.927039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.927055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.927070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.927086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.927100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.927116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.927130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.927147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e3e70 is same with the state(5) to be set 00:20:39.320 [2024-07-16 00:57:13.928510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.928536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.928563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.928579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.928596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.928610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.928626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.928640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.928656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.320 [2024-07-16 00:57:13.928670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.320 [2024-07-16 00:57:13.928687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.928700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.928717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.928731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.928747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.928760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.928776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.928790] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.928806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.928820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.928835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.928849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.928865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.928891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.928935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.928957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.928974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.928989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.929978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.929994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.930008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.930024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.321 [2024-07-16 00:57:13.930038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.321 [2024-07-16 00:57:13.930054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.322 [2024-07-16 00:57:13.930068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:39.322 [2024-07-16 00:57:13.930084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.322 [2024-07-16 00:57:13.930098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.322 [2024-07-16 00:57:13.930118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.322 [2024-07-16 00:57:13.930133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.322 [2024-07-16 00:57:13.930149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.322 [2024-07-16 00:57:13.930164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.322 [2024-07-16 00:57:13.930181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.322 [2024-07-16 00:57:13.930195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.322 [2024-07-16 00:57:13.930211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.322 [2024-07-16 00:57:13.930225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.322 [2024-07-16 00:57:13.930241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.322 [2024-07-16 00:57:13.930255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.322 [2024-07-16 00:57:13.930271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.322 [2024-07-16 00:57:13.930284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.322 [2024-07-16 00:57:13.930301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.322 [2024-07-16 00:57:13.930315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.322 [2024-07-16 00:57:13.930330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.322 [2024-07-16 00:57:13.930344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.322 [2024-07-16 00:57:13.930360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.322 [2024-07-16 00:57:13.930374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.322 [2024-07-16 
00:57:13.930390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.322 [2024-07-16 00:57:13.930403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.322 [2024-07-16 00:57:13.930419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.322 [2024-07-16 00:57:13.930433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.322 [2024-07-16 00:57:13.930449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.322 [2024-07-16 00:57:13.930463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.322 [2024-07-16 00:57:13.930478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.322 [2024-07-16 00:57:13.930495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.322 [2024-07-16 00:57:13.930512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.322 [2024-07-16 00:57:13.930526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.322 [2024-07-16 00:57:13.930542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfba710 is same with the state(5) to be set 00:20:39.322 [2024-07-16 00:57:13.932553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:39.322 [2024-07-16 00:57:13.932589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:39.322 [2024-07-16 00:57:13.932610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:39.322 [2024-07-16 00:57:13.932626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:39.322 [2024-07-16 00:57:13.932644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:39.322 [2024-07-16 00:57:13.932749] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:39.322 [2024-07-16 00:57:13.932778] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:39.322 [2024-07-16 00:57:13.932803] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:39.322 [2024-07-16 00:57:13.932823] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:39.322 [2024-07-16 00:57:13.933220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:39.322 [2024-07-16 00:57:13.933247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:39.322 [2024-07-16 00:57:13.933264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:20:39.322 task offset: 23808 on job bdev=Nvme10n1 fails
00:20:39.322
00:20:39.322 Latency(us)
00:20:39.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:39.322 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.322 Job: Nvme1n1 ended in about 0.86 seconds with error
00:20:39.322 Verification LBA range: start 0x0 length 0x400
00:20:39.322 Nvme1n1 : 0.86 155.37 9.71 74.77 0.00 274817.74 19126.80 288940.94
00:20:39.322 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.322 Job: Nvme2n1 ended in about 0.86 seconds with error
00:20:39.322 Verification LBA range: start 0x0 length 0x400
00:20:39.322 Nvme2n1 : 0.86 152.60 9.54 73.99 0.00 273144.44 17961.72 250104.79
00:20:39.322 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.322 Job: Nvme3n1 ended in about 0.86 seconds with error
00:20:39.322 Verification LBA range: start 0x0 length 0x400
00:20:39.322 Nvme3n1 : 0.86 153.20 9.57 74.28 0.00 266180.93 18641.35 285834.05
00:20:39.322 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.322 Job: Nvme4n1 ended in about 0.85 seconds with error
00:20:39.322 Verification LBA range: start 0x0 length 0x400
00:20:39.322 Nvme4n1 : 0.85 150.26 9.39 75.13 0.00 262495.95 22622.06 320009.86
00:20:39.322 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.322 Job: Nvme5n1 ended in about 0.87 seconds with error
00:20:39.322 Verification LBA range: start 0x0 length 0x400
00:20:39.322 Nvme5n1 : 0.87 147.41 9.21 73.71 0.00 261835.03 19903.53 254765.13
00:20:39.322 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.322 Job: Nvme6n1 ended in about 0.87 seconds with error
00:20:39.322 Verification LBA range: start 0x0 length 0x400
00:20:39.322 Nvme6n1 : 0.87 146.86 9.18 73.43 0.00 256961.04 21359.88 264085.81
00:20:39.322 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.322 Job: Nvme7n1 ended in about 0.87 seconds with error
00:20:39.322 Verification LBA range: start 0x0 length 0x400
00:20:39.322 Nvme7n1 : 0.87 146.32 9.14 73.16 0.00 252038.76 21456.97 271853.04
00:20:39.322 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.322 Job: Nvme8n1 ended in about 0.88 seconds with error
00:20:39.322 Verification LBA range: start 0x0 length 0x400
00:20:39.322 Nvme8n1 : 0.88 149.19 9.32 72.33 0.00 244269.22 21554.06 260978.92
00:20:39.322 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.322 Job: Nvme9n1 ended in about 0.89 seconds with error
00:20:39.322 Verification LBA range: start 0x0 length 0x400
00:20:39.322 Nvme9n1 : 0.89 144.12 9.01 72.06 0.00 244528.17 26602.76 253211.69
00:20:39.322 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.322 Job: Nvme10n1 ended in about 0.83 seconds with error
00:20:39.322 Verification LBA range: start 0x0 length 0x400
00:20:39.322 Nvme10n1 : 0.83 155.13 9.70 77.56 0.00 217812.13 18544.26 267192.70
00:20:39.322 ===================================================================================================================
00:20:39.322 Total : 1500.47 93.78 740.42 0.00 255494.31 17961.72 320009.86
00:20:39.322 [2024-07-16 00:57:13.960170] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:39.322 [2024-07-16 00:57:13.960262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:39.322 [2024-07-16 00:57:13.960661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.322 [2024-07-16 00:57:13.960710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3a6c0 with addr=10.0.0.2, port=4420
00:20:39.322 [2024-07-16 00:57:13.960730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3a6c0 is same with the state(5) to be set
00:20:39.322 [2024-07-16 00:57:13.961014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.322 [2024-07-16 00:57:13.961042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb9300 with addr=10.0.0.2, port=4420
00:20:39.322 [2024-07-16 00:57:13.961058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb9300 is same with the state(5) to be set
00:20:39.322 [2024-07-16 00:57:13.961249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.322 [2024-07-16 00:57:13.961276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfda4c0 with addr=10.0.0.2, port=4420
00:20:39.322 [2024-07-16 00:57:13.961293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfda4c0 is same with the state(5) to be set
00:20:39.322 [2024-07-16 00:57:13.961442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.322 [2024-07-16 00:57:13.961476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4a7b0 with addr=10.0.0.2, port=4420
00:20:39.322 [2024-07-16 00:57:13.961492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4a7b0 is same with the state(5) to be set
00:20:39.322 [2024-07-16 00:57:13.963399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.322 [2024-07-16 00:57:13.963430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x910610 with addr=10.0.0.2, port=4420
00:20:39.322 [2024-07-16 00:57:13.963453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x910610 is same with the state(5) to be set
00:20:39.322 [2024-07-16 00:57:13.963622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.322 [2024-07-16 00:57:13.963649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4b030 with addr=10.0.0.2, port=4420
00:20:39.322 [2024-07-16 00:57:13.963666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4b030 is same with the state(5) to be set
00:20:39.323 [2024-07-16 00:57:13.963840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.323 [2024-07-16 00:57:13.963867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedd680 with addr=10.0.0.2, port=4420
00:20:39.323 [2024-07-16 00:57:13.963902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedd680 is same with
the state(5) to be set 00:20:39.323 [2024-07-16 00:57:13.964042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.323 [2024-07-16 00:57:13.964068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb9660 with addr=10.0.0.2, port=4420 00:20:39.323 [2024-07-16 00:57:13.964085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb9660 is same with the state(5) to be set 00:20:39.323 [2024-07-16 00:57:13.964112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3a6c0 (9): Bad file descriptor 00:20:39.323 [2024-07-16 00:57:13.964135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb9300 (9): Bad file descriptor 00:20:39.323 [2024-07-16 00:57:13.964154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfda4c0 (9): Bad file descriptor 00:20:39.323 [2024-07-16 00:57:13.964172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4a7b0 (9): Bad file descriptor 00:20:39.323 [2024-07-16 00:57:13.964231] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:39.323 [2024-07-16 00:57:13.964269] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:39.323 [2024-07-16 00:57:13.964294] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:39.323 [2024-07-16 00:57:13.964315] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:39.323 [2024-07-16 00:57:13.964333] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:39.323 [2024-07-16 00:57:13.964352] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:39.323 [2024-07-16 00:57:13.964659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:39.323 [2024-07-16 00:57:13.964690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.323 [2024-07-16 00:57:13.964750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x910610 (9): Bad file descriptor 00:20:39.323 [2024-07-16 00:57:13.964776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4b030 (9): Bad file descriptor 00:20:39.323 [2024-07-16 00:57:13.964795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedd680 (9): Bad file descriptor 00:20:39.323 [2024-07-16 00:57:13.964812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb9660 (9): Bad file descriptor 00:20:39.323 [2024-07-16 00:57:13.964829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:39.323 [2024-07-16 00:57:13.964844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:39.323 [2024-07-16 00:57:13.964870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:20:39.323 [2024-07-16 00:57:13.964899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:39.323 [2024-07-16 00:57:13.964914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:39.323 [2024-07-16 00:57:13.964928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:39.323 [2024-07-16 00:57:13.964945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:39.323 [2024-07-16 00:57:13.964964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:39.323 [2024-07-16 00:57:13.964977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:39.323 [2024-07-16 00:57:13.964993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:39.323 [2024-07-16 00:57:13.965007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:39.323 [2024-07-16 00:57:13.965020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:39.323 [2024-07-16 00:57:13.965109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:39.323 [2024-07-16 00:57:13.965130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:39.323 [2024-07-16 00:57:13.965143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:39.323 [2024-07-16 00:57:13.965154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:39.323 [2024-07-16 00:57:13.965320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.323 [2024-07-16 00:57:13.965346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe32d70 with addr=10.0.0.2, port=4420 00:20:39.323 [2024-07-16 00:57:13.965362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32d70 is same with the state(5) to be set 00:20:39.323 [2024-07-16 00:57:13.965508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.323 [2024-07-16 00:57:13.965535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0eab0 with addr=10.0.0.2, port=4420 00:20:39.323 [2024-07-16 00:57:13.965551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eab0 is same with the state(5) to be set 00:20:39.323 [2024-07-16 00:57:13.965565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:39.323 [2024-07-16 00:57:13.965579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:39.323 [2024-07-16 00:57:13.965592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:20:39.323 [2024-07-16 00:57:13.965609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:39.323 [2024-07-16 00:57:13.965623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:39.323 [2024-07-16 00:57:13.965636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:39.323 [2024-07-16 00:57:13.965651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:39.323 [2024-07-16 00:57:13.965665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:39.323 [2024-07-16 00:57:13.965677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:39.323 [2024-07-16 00:57:13.965693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:39.323 [2024-07-16 00:57:13.965706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:39.323 [2024-07-16 00:57:13.965719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:39.323 [2024-07-16 00:57:13.965755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:39.323 [2024-07-16 00:57:13.965775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:39.323 [2024-07-16 00:57:13.965787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:39.323 [2024-07-16 00:57:13.965798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:39.323 [2024-07-16 00:57:13.965818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe32d70 (9): Bad file descriptor 00:20:39.323 [2024-07-16 00:57:13.965838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0eab0 (9): Bad file descriptor 00:20:39.323 [2024-07-16 00:57:13.965901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:39.323 [2024-07-16 00:57:13.965922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:39.323 [2024-07-16 00:57:13.965936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:39.323 [2024-07-16 00:57:13.965953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.323 [2024-07-16 00:57:13.965967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.323 [2024-07-16 00:57:13.965979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.323 [2024-07-16 00:57:13.966017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:39.323 [2024-07-16 00:57:13.966035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.891 00:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:39.891 00:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2689128 00:20:40.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2689128) - No such process 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:40.828 rmmod nvme_tcp 00:20:40.828 rmmod nvme_fabrics 00:20:40.828 rmmod nvme_keyring 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.828 00:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.378 00:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:43.378 00:20:43.378 real 0m7.896s 00:20:43.378 user 0m19.685s 00:20:43.378 sys 0m1.430s 00:20:43.378 
00:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:43.378 00:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:43.378 ************************************ 00:20:43.378 END TEST nvmf_shutdown_tc3 00:20:43.378 ************************************ 00:20:43.378 00:57:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:43.378 00:57:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:43.378 00:20:43.378 real 0m29.123s 00:20:43.378 user 1m23.867s 00:20:43.378 sys 0m6.621s 00:20:43.378 00:57:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:43.378 00:57:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:43.378 ************************************ 00:20:43.378 END TEST nvmf_shutdown 00:20:43.378 ************************************ 00:20:43.378 00:57:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:43.378 00:57:17 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:43.378 00:57:17 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:43.378 00:57:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:43.378 00:57:17 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:43.378 00:57:17 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:43.378 00:57:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:43.378 00:57:17 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:43.378 00:57:17 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:43.378 00:57:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:43.378 00:57:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:43.378 00:57:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:43.378 ************************************ 00:20:43.378 START TEST nvmf_multicontroller 00:20:43.378 ************************************ 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:43.378 * Looking for test storage... 
00:20:43.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.378 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:43.379 00:57:17 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:43.379 00:57:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:45.278 00:57:19 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:45.278 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:45.278 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:45.278 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:45.278 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:45.278 00:57:19 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:45.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:45.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:20:45.278 00:20:45.278 --- 10.0.0.2 ping statistics --- 00:20:45.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.278 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:45.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:45.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:20:45.278 00:20:45.278 --- 10.0.0.1 ping statistics --- 00:20:45.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.278 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2691530 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2691530 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2691530 ']' 00:20:45.278 00:57:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.279 00:57:19 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:20:45.279 00:57:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.279 00:57:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:45.279 00:57:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.279 [2024-07-16 00:57:19.949360] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:20:45.279 [2024-07-16 00:57:19.949455] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.279 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.279 [2024-07-16 00:57:20.026277] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:45.536 [2024-07-16 00:57:20.142019] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.536 [2024-07-16 00:57:20.142078] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.536 [2024-07-16 00:57:20.142106] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.536 [2024-07-16 00:57:20.142119] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.536 [2024-07-16 00:57:20.142128] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
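For reference, the nvmf_tcp_init sequence traced above reduces to the following shell steps. This is a minimal sketch using the interface names (cvl_0_0, cvl_0_1), addresses and core mask reported by this run; the nvmf_tgt path is abbreviated to a repo-relative one, and everything needs root.

  # move one E810 port into a private namespace; it becomes the target side
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # 10.0.0.1 = initiator side (root namespace), 10.0.0.2 = target side
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open the default NVMe/TCP port and verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # start the NVMe-oF target inside the namespace (cores 1-3, as in this job)
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE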
00:20:45.536 [2024-07-16 00:57:20.142199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.536 [2024-07-16 00:57:20.142259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:45.536 [2024-07-16 00:57:20.142262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.536 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.536 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:45.536 00:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:45.536 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:45.536 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.536 00:57:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.536 00:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:45.536 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.536 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.536 [2024-07-16 00:57:20.289385] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.795 Malloc0 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.795 [2024-07-16 00:57:20.349619] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.795 
00:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.795 [2024-07-16 00:57:20.357487] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.795 Malloc1 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2691667 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2691667 /var/tmp/bdevperf.sock 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2691667 ']' 00:20:45.795 00:57:20 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:45.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:45.795 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:46.053 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:46.053 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:46.053 00:57:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:46.053 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.053 00:57:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:46.312 NVMe0n1 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.312 1 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 
-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:46.312 request: 00:20:46.312 { 00:20:46.312 "name": "NVMe0", 00:20:46.312 "trtype": "tcp", 00:20:46.312 "traddr": "10.0.0.2", 00:20:46.312 "adrfam": "ipv4", 00:20:46.312 "trsvcid": "4420", 00:20:46.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.312 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:46.312 "hostaddr": "10.0.0.2", 00:20:46.312 "hostsvcid": "60000", 00:20:46.312 "prchk_reftag": false, 00:20:46.312 "prchk_guard": false, 00:20:46.312 "hdgst": false, 00:20:46.312 "ddgst": false, 00:20:46.312 "method": "bdev_nvme_attach_controller", 00:20:46.312 "req_id": 1 00:20:46.312 } 00:20:46.312 Got JSON-RPC error response 00:20:46.312 response: 00:20:46.312 { 00:20:46.312 "code": -114, 00:20:46.312 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:46.312 } 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:46.312 request: 00:20:46.312 { 00:20:46.312 "name": "NVMe0", 00:20:46.312 "trtype": "tcp", 00:20:46.312 "traddr": "10.0.0.2", 00:20:46.312 "adrfam": "ipv4", 00:20:46.312 "trsvcid": "4420", 00:20:46.312 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:46.312 "hostaddr": "10.0.0.2", 00:20:46.312 "hostsvcid": "60000", 00:20:46.312 "prchk_reftag": false, 00:20:46.312 "prchk_guard": false, 
00:20:46.312 "hdgst": false, 00:20:46.312 "ddgst": false, 00:20:46.312 "method": "bdev_nvme_attach_controller", 00:20:46.312 "req_id": 1 00:20:46.312 } 00:20:46.312 Got JSON-RPC error response 00:20:46.312 response: 00:20:46.312 { 00:20:46.312 "code": -114, 00:20:46.312 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:46.312 } 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:46.312 request: 00:20:46.312 { 00:20:46.312 "name": "NVMe0", 00:20:46.312 "trtype": "tcp", 00:20:46.312 "traddr": "10.0.0.2", 00:20:46.312 "adrfam": "ipv4", 00:20:46.312 "trsvcid": "4420", 00:20:46.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.312 "hostaddr": "10.0.0.2", 00:20:46.312 "hostsvcid": "60000", 00:20:46.312 "prchk_reftag": false, 00:20:46.312 "prchk_guard": false, 00:20:46.312 "hdgst": false, 00:20:46.312 "ddgst": false, 00:20:46.312 "multipath": "disable", 00:20:46.312 "method": "bdev_nvme_attach_controller", 00:20:46.312 "req_id": 1 00:20:46.312 } 00:20:46.312 Got JSON-RPC error response 00:20:46.312 response: 00:20:46.312 { 00:20:46.312 "code": -114, 00:20:46.312 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:46.312 } 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:46.312 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:46.313 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:46.313 00:57:21 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:46.313 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:46.313 00:57:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:46.313 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:46.313 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:46.313 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:46.313 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:46.313 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:46.313 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:46.313 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:46.313 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.313 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:46.313 request: 00:20:46.313 { 00:20:46.313 "name": "NVMe0", 00:20:46.313 "trtype": "tcp", 00:20:46.313 "traddr": "10.0.0.2", 00:20:46.313 "adrfam": "ipv4", 00:20:46.313 "trsvcid": "4420", 00:20:46.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.313 "hostaddr": "10.0.0.2", 00:20:46.313 "hostsvcid": "60000", 00:20:46.313 "prchk_reftag": false, 00:20:46.313 "prchk_guard": false, 00:20:46.313 "hdgst": false, 00:20:46.313 "ddgst": false, 00:20:46.313 "multipath": "failover", 00:20:46.570 "method": "bdev_nvme_attach_controller", 00:20:46.570 "req_id": 1 00:20:46.570 } 00:20:46.570 Got JSON-RPC error response 00:20:46.570 response: 00:20:46.570 { 00:20:46.570 "code": -114, 00:20:46.570 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:46.570 } 00:20:46.570 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:46.570 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:46.570 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:46.570 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:46.570 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:46.570 00:57:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:46.570 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.570 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:46.570 00:20:46.570 00:57:21 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.570 00:57:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:46.570 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.570 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:46.570 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.570 00:57:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:46.570 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.570 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:46.830 00:20:46.830 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.830 00:57:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:46.830 00:57:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:46.830 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.830 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:46.830 00:57:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.830 00:57:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:46.830 00:57:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:48.208 0 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2691667 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2691667 ']' 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2691667 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2691667 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2691667' 00:20:48.208 killing process with pid 2691667 00:20:48.208 00:57:22 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2691667 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2691667 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:20:48.208 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:48.208 [2024-07-16 00:57:20.463510] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:20:48.208 [2024-07-16 00:57:20.463593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2691667 ] 00:20:48.208 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.208 [2024-07-16 00:57:20.523650] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.208 [2024-07-16 00:57:20.632568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.208 [2024-07-16 00:57:21.410910] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 97a59236-076c-4d85-a9dd-d2070d1b1f1b already exists 00:20:48.208 [2024-07-16 00:57:21.410951] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:97a59236-076c-4d85-a9dd-d2070d1b1f1b alias for bdev NVMe1n1 00:20:48.208 [2024-07-16 00:57:21.410966] bdev_nvme.c:4322:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:48.208 Running I/O for 1 seconds... 
00:20:48.208 00:20:48.208 Latency(us) 00:20:48.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.208 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:48.208 NVMe0n1 : 1.01 18398.98 71.87 0.00 0.00 6938.71 3616.62 11213.94 00:20:48.208 =================================================================================================================== 00:20:48.208 Total : 18398.98 71.87 0.00 0.00 6938.71 3616.62 11213.94 00:20:48.208 Received shutdown signal, test time was about 1.000000 seconds 00:20:48.208 00:20:48.208 Latency(us) 00:20:48.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.208 =================================================================================================================== 00:20:48.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:48.208 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:48.208 rmmod nvme_tcp 00:20:48.208 rmmod nvme_fabrics 00:20:48.208 rmmod nvme_keyring 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2691530 ']' 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2691530 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2691530 ']' 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2691530 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2691530 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2691530' 00:20:48.208 killing process with pid 2691530 00:20:48.208 00:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2691530 00:20:48.208 00:57:22 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2691530 00:20:48.775 00:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:48.775 00:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:48.775 00:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:48.775 00:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:48.775 00:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:48.775 00:57:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.775 00:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:48.775 00:57:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.681 00:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:50.681 00:20:50.681 real 0m7.676s 00:20:50.681 user 0m12.331s 00:20:50.681 sys 0m2.332s 00:20:50.681 00:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:50.681 00:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:50.681 ************************************ 00:20:50.681 END TEST nvmf_multicontroller 00:20:50.681 ************************************ 00:20:50.681 00:57:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:50.681 00:57:25 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:50.681 00:57:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:50.681 00:57:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:50.681 00:57:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:50.681 ************************************ 00:20:50.681 START TEST nvmf_aer 00:20:50.681 ************************************ 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:50.681 * Looking for test storage... 
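The nvmftestfini/nvmf_tcp_fini teardown traced just above (before nvmf_aer starts) is the mirror image of that setup: unload the kernel NVMe/TCP modules probed for the test, stop the target, and undo the namespace plumbing. A sketch based on the commands visible in the trace; the namespace deletion is an assumption about what _remove_spdk_ns does rather than something shown explicitly.

  modprobe -v -r nvme-tcp          # also drags out nvme_fabrics / nvme_keyring here
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                  # the nvmf_tgt started earlier (pid 2691530 in this run)
  ip netns delete cvl_0_0_ns_spdk  # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1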
00:20:50.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:50.681 00:57:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:52.584 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:52.585 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:20:52.585 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:52.585 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:52.585 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.585 
00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.585 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:52.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:52.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:20:52.844 00:20:52.844 --- 10.0.0.2 ping statistics --- 00:20:52.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.844 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
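The block above is nvmftestinit's network bring-up for NET_TYPE=phy: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24 to act as the NVMe/TCP target, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened in iptables, and reachability is checked with a ping in each direction. Condensed into the equivalent manual sequence, using this run's interface names and addresses (they will differ on other hosts):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator (the reply completes just below)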
00:20:52.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:20:52.844 00:20:52.844 --- 10.0.0.1 ping statistics --- 00:20:52.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.844 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2693880 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2693880 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2693880 ']' 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:52.844 00:57:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.845 00:57:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:52.845 00:57:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:52.845 [2024-07-16 00:57:27.502003] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:20:52.845 [2024-07-16 00:57:27.502080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.845 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.845 [2024-07-16 00:57:27.569794] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:53.103 [2024-07-16 00:57:27.683574] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.103 [2024-07-16 00:57:27.683642] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
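nvmfappstart then launches the target application inside the target namespace and blocks until its RPC socket answers. Stripped of the harness plumbing, the launch is roughly the following sketch, where $SPDK_DIR is a placeholder for the checked-out spdk tree (the log uses the full Jenkins workspace path):

  # -i 0 fixes the shared-memory id, -e 0xFFFF sets a broad tracepoint group mask,
  # -m 0xF places one reactor on each of cores 0-3 (matching the reactor messages below)
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"    # harness helper: polls /var/tmp/spdk.sock until RPCs are served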
00:20:53.103 [2024-07-16 00:57:27.683655] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.103 [2024-07-16 00:57:27.683665] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.103 [2024-07-16 00:57:27.683674] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:53.103 [2024-07-16 00:57:27.683770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.103 [2024-07-16 00:57:27.685896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.103 [2024-07-16 00:57:27.685972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.103 [2024-07-16 00:57:27.685976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:54.041 [2024-07-16 00:57:28.514051] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:54.041 Malloc0 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:54.041 [2024-07-16 00:57:28.567985] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:54.041 [ 00:20:54.041 { 00:20:54.041 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:54.041 "subtype": "Discovery", 00:20:54.041 "listen_addresses": [], 00:20:54.041 "allow_any_host": true, 00:20:54.041 "hosts": [] 00:20:54.041 }, 00:20:54.041 { 00:20:54.041 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.041 "subtype": "NVMe", 00:20:54.041 "listen_addresses": [ 00:20:54.041 { 00:20:54.041 "trtype": "TCP", 00:20:54.041 "adrfam": "IPv4", 00:20:54.041 "traddr": "10.0.0.2", 00:20:54.041 "trsvcid": "4420" 00:20:54.041 } 00:20:54.041 ], 00:20:54.041 "allow_any_host": true, 00:20:54.041 "hosts": [], 00:20:54.041 "serial_number": "SPDK00000000000001", 00:20:54.041 "model_number": "SPDK bdev Controller", 00:20:54.041 "max_namespaces": 2, 00:20:54.041 "min_cntlid": 1, 00:20:54.041 "max_cntlid": 65519, 00:20:54.041 "namespaces": [ 00:20:54.041 { 00:20:54.041 "nsid": 1, 00:20:54.041 "bdev_name": "Malloc0", 00:20:54.041 "name": "Malloc0", 00:20:54.041 "nguid": "7860CC4CACE542C98C7303C9BA766970", 00:20:54.041 "uuid": "7860cc4c-ace5-42c9-8c73-03c9ba766970" 00:20:54.041 } 00:20:54.041 ] 00:20:54.041 } 00:20:54.041 ] 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2694037 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:54.041 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.041 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:54.301 Malloc1 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:54.301 [ 00:20:54.301 { 00:20:54.301 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:54.301 "subtype": "Discovery", 00:20:54.301 "listen_addresses": [], 00:20:54.301 "allow_any_host": true, 00:20:54.301 "hosts": [] 00:20:54.301 }, 00:20:54.301 { 00:20:54.301 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.301 "subtype": "NVMe", 00:20:54.301 "listen_addresses": [ 00:20:54.301 { 00:20:54.301 "trtype": "TCP", 00:20:54.301 "adrfam": "IPv4", 00:20:54.301 "traddr": "10.0.0.2", 00:20:54.301 "trsvcid": "4420" 00:20:54.301 } 00:20:54.301 ], 00:20:54.301 "allow_any_host": true, 00:20:54.301 "hosts": [], 00:20:54.301 "serial_number": "SPDK00000000000001", 00:20:54.301 "model_number": "SPDK bdev Controller", 00:20:54.301 "max_namespaces": 2, 00:20:54.301 "min_cntlid": 1, 00:20:54.301 "max_cntlid": 65519, 00:20:54.301 "namespaces": [ 00:20:54.301 { 00:20:54.301 "nsid": 1, 00:20:54.301 "bdev_name": "Malloc0", 00:20:54.301 "name": "Malloc0", 00:20:54.301 "nguid": "7860CC4CACE542C98C7303C9BA766970", 00:20:54.301 "uuid": "7860cc4c-ace5-42c9-8c73-03c9ba766970" 00:20:54.301 }, 00:20:54.301 { 00:20:54.301 "nsid": 2, 00:20:54.301 "bdev_name": "Malloc1", 00:20:54.301 Asynchronous Event Request test 00:20:54.301 Attaching to 10.0.0.2 00:20:54.301 Attached to 10.0.0.2 00:20:54.301 Registering asynchronous event callbacks... 00:20:54.301 Starting namespace attribute notice tests for all controllers... 00:20:54.301 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:54.301 aer_cb - Changed Namespace 00:20:54.301 Cleaning up... 
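That interleaved block is the heart of the nvmf_aer test: the aer host tool attaches to cnode1, arms Asynchronous Event Requests, and only creates /tmp/aer_touch_file after it receives the Namespace Attribute Changed notice (log page 4, event type 0x02 in the output above), which the harness provokes by hot-adding a second namespace. Reduced to the essential calls, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper (an assumed equivalence, not shown in the log):

  # host side: wait for the event and signal completion through the touch file
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  # target side: adding namespace 2 is what fires the AEN
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2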
00:20:54.301 "name": "Malloc1", 00:20:54.301 "nguid": "D43FACA883ED473D92999B29FC11AE9E", 00:20:54.301 "uuid": "d43faca8-83ed-473d-9299-9b29fc11ae9e" 00:20:54.301 } 00:20:54.301 ] 00:20:54.301 } 00:20:54.301 ] 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2694037 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:54.301 00:57:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:54.301 rmmod nvme_tcp 00:20:54.301 rmmod nvme_fabrics 00:20:54.301 rmmod nvme_keyring 00:20:54.301 00:57:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:54.301 00:57:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:54.301 00:57:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:54.301 00:57:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2693880 ']' 00:20:54.301 00:57:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2693880 00:20:54.301 00:57:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2693880 ']' 00:20:54.302 00:57:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2693880 00:20:54.302 00:57:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:20:54.302 00:57:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:54.302 00:57:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2693880 00:20:54.561 00:57:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:54.561 00:57:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:54.561 00:57:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2693880' 00:20:54.561 killing process with pid 2693880 00:20:54.561 00:57:29 nvmf_tcp.nvmf_aer 
-- common/autotest_common.sh@967 -- # kill 2693880 00:20:54.561 00:57:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2693880 00:20:54.822 00:57:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:54.822 00:57:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:54.822 00:57:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:54.822 00:57:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:54.822 00:57:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:54.822 00:57:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.822 00:57:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.822 00:57:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.730 00:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:56.730 00:20:56.730 real 0m6.026s 00:20:56.730 user 0m7.259s 00:20:56.730 sys 0m1.904s 00:20:56.730 00:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:56.730 00:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.730 ************************************ 00:20:56.730 END TEST nvmf_aer 00:20:56.730 ************************************ 00:20:56.730 00:57:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:56.730 00:57:31 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:56.730 00:57:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:56.730 00:57:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:56.730 00:57:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:56.730 ************************************ 00:20:56.730 START TEST nvmf_async_init 00:20:56.730 ************************************ 00:20:56.730 00:57:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:56.730 * Looking for test storage... 
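The delete/rmmod/kill lines above are nvmf_aer's teardown, run in reverse order of setup, before nvmf_async_init (just starting here) repeats the same bring-up. Condensed:

  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_malloc_delete Malloc1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -r nvme-tcp && modprobe -r nvme-fabrics   # unloads nvme_tcp, nvme_fabrics, nvme_keyring
  kill "$nvmfpid"                                    # killprocess 2693880 in this run
  ip -4 addr flush cvl_0_1                           # remove_spdk_ns (output suppressed above) is expected to drop cvl_0_0_ns_spdk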
00:20:56.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:56.730 00:57:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:56.730 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:56.730 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.989 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.989 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.989 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.989 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.989 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.989 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.989 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.989 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.989 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.989 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.989 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.989 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.989 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.989 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:56.989 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=bd18bc32313f46c780ff396416ce5b4c 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:56.990 00:57:31 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:56.990 00:57:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:58.896 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:58.896 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:58.896 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
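This second device scan is how SPDK_TEST_NVMF_NICS=e810 gets resolved to concrete interfaces: common.sh keeps tables of Intel and Mellanox PCI vendor/device IDs, matches the host's functions against the selected family (0x8086:0x159b, driven by ice, for the two E810 ports here), and reads each function's netdev name from sysfs. Per device the lookup is simply:

  pci=0000:0a:00.0
  ls /sys/bus/pci/devices/$pci/net/      # -> cvl_0_0, the netdev that ends up in net_devs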
00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:58.896 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:58.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:58.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:20:58.896 00:20:58.896 --- 10.0.0.2 ping statistics --- 00:20:58.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.896 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:58.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:58.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:20:58.896 00:20:58.896 --- 10.0.0.1 ping statistics --- 00:20:58.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.896 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2696037 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2696037 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2696037 ']' 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:58.896 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.897 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:58.897 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:58.897 [2024-07-16 00:57:33.606711] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
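Compared with the aer run, nvmf_async_init keeps the target small: a single core (-m 0x1), a null bdev instead of malloc, and the GUID generated earlier (bd18bc32313f46c780ff396416ce5b4c) passed to nvmf_subsystem_add_ns so the test can check that the attached nvme0n1 reports it back as its UUID. The configuration logged below reduces to the following sketch, again with scripts/rpc.py standing in for rpc_cmd:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py bdev_null_create null0 1024 512        # 1024 MiB of 512 B blocks -> the 2097152 num_blocks reported below
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g bd18bc32313f46c780ff396416ce5b4c
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # the same process acts as its own host: attach a controller and surface namespace 1 as bdev nvme0n1
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0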
00:20:58.897 [2024-07-16 00:57:33.606796] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.897 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.154 [2024-07-16 00:57:33.674728] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.154 [2024-07-16 00:57:33.792810] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.154 [2024-07-16 00:57:33.792869] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.154 [2024-07-16 00:57:33.792893] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.154 [2024-07-16 00:57:33.792907] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.154 [2024-07-16 00:57:33.792939] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:59.154 [2024-07-16 00:57:33.792968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.154 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:59.154 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:20:59.154 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:59.154 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:59.154 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.413 [2024-07-16 00:57:33.929632] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.413 null0 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.413 00:57:33 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g bd18bc32313f46c780ff396416ce5b4c 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.413 [2024-07-16 00:57:33.969900] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.413 00:57:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.672 nvme0n1 00:20:59.672 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.672 00:57:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:59.672 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.672 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.672 [ 00:20:59.672 { 00:20:59.672 "name": "nvme0n1", 00:20:59.672 "aliases": [ 00:20:59.672 "bd18bc32-313f-46c7-80ff-396416ce5b4c" 00:20:59.672 ], 00:20:59.672 "product_name": "NVMe disk", 00:20:59.672 "block_size": 512, 00:20:59.672 "num_blocks": 2097152, 00:20:59.672 "uuid": "bd18bc32-313f-46c7-80ff-396416ce5b4c", 00:20:59.672 "assigned_rate_limits": { 00:20:59.672 "rw_ios_per_sec": 0, 00:20:59.672 "rw_mbytes_per_sec": 0, 00:20:59.672 "r_mbytes_per_sec": 0, 00:20:59.672 "w_mbytes_per_sec": 0 00:20:59.672 }, 00:20:59.672 "claimed": false, 00:20:59.672 "zoned": false, 00:20:59.672 "supported_io_types": { 00:20:59.672 "read": true, 00:20:59.672 "write": true, 00:20:59.672 "unmap": false, 00:20:59.672 "flush": true, 00:20:59.672 "reset": true, 00:20:59.672 "nvme_admin": true, 00:20:59.672 "nvme_io": true, 00:20:59.672 "nvme_io_md": false, 00:20:59.672 "write_zeroes": true, 00:20:59.672 "zcopy": false, 00:20:59.672 "get_zone_info": false, 00:20:59.672 "zone_management": false, 00:20:59.672 "zone_append": false, 00:20:59.672 "compare": true, 00:20:59.672 "compare_and_write": true, 00:20:59.672 "abort": true, 00:20:59.672 "seek_hole": false, 00:20:59.672 "seek_data": false, 00:20:59.672 "copy": true, 00:20:59.672 "nvme_iov_md": false 00:20:59.672 }, 00:20:59.672 "memory_domains": [ 00:20:59.672 { 00:20:59.672 "dma_device_id": "system", 00:20:59.672 "dma_device_type": 1 00:20:59.672 } 00:20:59.672 ], 00:20:59.672 "driver_specific": { 00:20:59.672 "nvme": [ 00:20:59.672 { 00:20:59.672 "trid": { 00:20:59.672 "trtype": "TCP", 00:20:59.672 "adrfam": "IPv4", 00:20:59.672 "traddr": "10.0.0.2", 
00:20:59.672 "trsvcid": "4420", 00:20:59.672 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:59.672 }, 00:20:59.672 "ctrlr_data": { 00:20:59.672 "cntlid": 1, 00:20:59.672 "vendor_id": "0x8086", 00:20:59.672 "model_number": "SPDK bdev Controller", 00:20:59.672 "serial_number": "00000000000000000000", 00:20:59.672 "firmware_revision": "24.09", 00:20:59.672 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:59.672 "oacs": { 00:20:59.672 "security": 0, 00:20:59.672 "format": 0, 00:20:59.672 "firmware": 0, 00:20:59.672 "ns_manage": 0 00:20:59.672 }, 00:20:59.672 "multi_ctrlr": true, 00:20:59.672 "ana_reporting": false 00:20:59.672 }, 00:20:59.672 "vs": { 00:20:59.672 "nvme_version": "1.3" 00:20:59.673 }, 00:20:59.673 "ns_data": { 00:20:59.673 "id": 1, 00:20:59.673 "can_share": true 00:20:59.673 } 00:20:59.673 } 00:20:59.673 ], 00:20:59.673 "mp_policy": "active_passive" 00:20:59.673 } 00:20:59.673 } 00:20:59.673 ] 00:20:59.673 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.673 00:57:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:59.673 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.673 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.673 [2024-07-16 00:57:34.223192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:59.673 [2024-07-16 00:57:34.223281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df02b0 (9): Bad file descriptor 00:20:59.673 [2024-07-16 00:57:34.396041] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:59.673 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.673 00:57:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:59.673 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.673 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.673 [ 00:20:59.673 { 00:20:59.673 "name": "nvme0n1", 00:20:59.673 "aliases": [ 00:20:59.673 "bd18bc32-313f-46c7-80ff-396416ce5b4c" 00:20:59.673 ], 00:20:59.673 "product_name": "NVMe disk", 00:20:59.673 "block_size": 512, 00:20:59.673 "num_blocks": 2097152, 00:20:59.673 "uuid": "bd18bc32-313f-46c7-80ff-396416ce5b4c", 00:20:59.673 "assigned_rate_limits": { 00:20:59.673 "rw_ios_per_sec": 0, 00:20:59.673 "rw_mbytes_per_sec": 0, 00:20:59.673 "r_mbytes_per_sec": 0, 00:20:59.673 "w_mbytes_per_sec": 0 00:20:59.673 }, 00:20:59.673 "claimed": false, 00:20:59.673 "zoned": false, 00:20:59.673 "supported_io_types": { 00:20:59.673 "read": true, 00:20:59.673 "write": true, 00:20:59.673 "unmap": false, 00:20:59.673 "flush": true, 00:20:59.673 "reset": true, 00:20:59.673 "nvme_admin": true, 00:20:59.673 "nvme_io": true, 00:20:59.673 "nvme_io_md": false, 00:20:59.673 "write_zeroes": true, 00:20:59.673 "zcopy": false, 00:20:59.673 "get_zone_info": false, 00:20:59.673 "zone_management": false, 00:20:59.673 "zone_append": false, 00:20:59.673 "compare": true, 00:20:59.673 "compare_and_write": true, 00:20:59.673 "abort": true, 00:20:59.673 "seek_hole": false, 00:20:59.673 "seek_data": false, 00:20:59.673 "copy": true, 00:20:59.673 "nvme_iov_md": false 00:20:59.673 }, 00:20:59.673 "memory_domains": [ 00:20:59.673 { 00:20:59.673 "dma_device_id": "system", 00:20:59.673 "dma_device_type": 
1 00:20:59.673 } 00:20:59.673 ], 00:20:59.673 "driver_specific": { 00:20:59.673 "nvme": [ 00:20:59.673 { 00:20:59.673 "trid": { 00:20:59.673 "trtype": "TCP", 00:20:59.673 "adrfam": "IPv4", 00:20:59.673 "traddr": "10.0.0.2", 00:20:59.673 "trsvcid": "4420", 00:20:59.673 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:59.673 }, 00:20:59.673 "ctrlr_data": { 00:20:59.673 "cntlid": 2, 00:20:59.673 "vendor_id": "0x8086", 00:20:59.673 "model_number": "SPDK bdev Controller", 00:20:59.673 "serial_number": "00000000000000000000", 00:20:59.673 "firmware_revision": "24.09", 00:20:59.673 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:59.673 "oacs": { 00:20:59.673 "security": 0, 00:20:59.673 "format": 0, 00:20:59.673 "firmware": 0, 00:20:59.673 "ns_manage": 0 00:20:59.673 }, 00:20:59.673 "multi_ctrlr": true, 00:20:59.673 "ana_reporting": false 00:20:59.673 }, 00:20:59.673 "vs": { 00:20:59.673 "nvme_version": "1.3" 00:20:59.673 }, 00:20:59.673 "ns_data": { 00:20:59.673 "id": 1, 00:20:59.673 "can_share": true 00:20:59.673 } 00:20:59.673 } 00:20:59.673 ], 00:20:59.673 "mp_policy": "active_passive" 00:20:59.673 } 00:20:59.673 } 00:20:59.673 ] 00:20:59.673 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.673 00:57:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.673 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.673 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.931 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.931 00:57:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:59.931 00:57:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.0CTp1fJ37D 00:20:59.931 00:57:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:59.931 00:57:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.0CTp1fJ37D 00:20:59.931 00:57:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:59.931 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.931 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.931 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.931 00:57:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:59.931 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.931 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.931 [2024-07-16 00:57:34.455982] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:59.931 [2024-07-16 00:57:34.456107] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:59.931 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.931 00:57:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0CTp1fJ37D 00:20:59.931 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:20:59.931 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.931 [2024-07-16 00:57:34.464009] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0CTp1fJ37D 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.932 [2024-07-16 00:57:34.472032] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:59.932 [2024-07-16 00:57:34.472091] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:59.932 nvme0n1 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.932 [ 00:20:59.932 { 00:20:59.932 "name": "nvme0n1", 00:20:59.932 "aliases": [ 00:20:59.932 "bd18bc32-313f-46c7-80ff-396416ce5b4c" 00:20:59.932 ], 00:20:59.932 "product_name": "NVMe disk", 00:20:59.932 "block_size": 512, 00:20:59.932 "num_blocks": 2097152, 00:20:59.932 "uuid": "bd18bc32-313f-46c7-80ff-396416ce5b4c", 00:20:59.932 "assigned_rate_limits": { 00:20:59.932 "rw_ios_per_sec": 0, 00:20:59.932 "rw_mbytes_per_sec": 0, 00:20:59.932 "r_mbytes_per_sec": 0, 00:20:59.932 "w_mbytes_per_sec": 0 00:20:59.932 }, 00:20:59.932 "claimed": false, 00:20:59.932 "zoned": false, 00:20:59.932 "supported_io_types": { 00:20:59.932 "read": true, 00:20:59.932 "write": true, 00:20:59.932 "unmap": false, 00:20:59.932 "flush": true, 00:20:59.932 "reset": true, 00:20:59.932 "nvme_admin": true, 00:20:59.932 "nvme_io": true, 00:20:59.932 "nvme_io_md": false, 00:20:59.932 "write_zeroes": true, 00:20:59.932 "zcopy": false, 00:20:59.932 "get_zone_info": false, 00:20:59.932 "zone_management": false, 00:20:59.932 "zone_append": false, 00:20:59.932 "compare": true, 00:20:59.932 "compare_and_write": true, 00:20:59.932 "abort": true, 00:20:59.932 "seek_hole": false, 00:20:59.932 "seek_data": false, 00:20:59.932 "copy": true, 00:20:59.932 "nvme_iov_md": false 00:20:59.932 }, 00:20:59.932 "memory_domains": [ 00:20:59.932 { 00:20:59.932 "dma_device_id": "system", 00:20:59.932 "dma_device_type": 1 00:20:59.932 } 00:20:59.932 ], 00:20:59.932 "driver_specific": { 00:20:59.932 "nvme": [ 00:20:59.932 { 00:20:59.932 "trid": { 00:20:59.932 "trtype": "TCP", 00:20:59.932 "adrfam": "IPv4", 00:20:59.932 "traddr": "10.0.0.2", 00:20:59.932 "trsvcid": "4421", 00:20:59.932 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:59.932 }, 00:20:59.932 "ctrlr_data": { 00:20:59.932 "cntlid": 3, 00:20:59.932 "vendor_id": "0x8086", 00:20:59.932 "model_number": "SPDK bdev Controller", 00:20:59.932 "serial_number": "00000000000000000000", 00:20:59.932 "firmware_revision": "24.09", 00:20:59.932 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:20:59.932 "oacs": { 00:20:59.932 "security": 0, 00:20:59.932 "format": 0, 00:20:59.932 "firmware": 0, 00:20:59.932 "ns_manage": 0 00:20:59.932 }, 00:20:59.932 "multi_ctrlr": true, 00:20:59.932 "ana_reporting": false 00:20:59.932 }, 00:20:59.932 "vs": { 00:20:59.932 "nvme_version": "1.3" 00:20:59.932 }, 00:20:59.932 "ns_data": { 00:20:59.932 "id": 1, 00:20:59.932 "can_share": true 00:20:59.932 } 00:20:59.932 } 00:20:59.932 ], 00:20:59.932 "mp_policy": "active_passive" 00:20:59.932 } 00:20:59.932 } 00:20:59.932 ] 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.0CTp1fJ37D 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:59.932 rmmod nvme_tcp 00:20:59.932 rmmod nvme_fabrics 00:20:59.932 rmmod nvme_keyring 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2696037 ']' 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2696037 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2696037 ']' 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2696037 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2696037 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2696037' 00:20:59.932 killing process with pid 2696037 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2696037 00:20:59.932 [2024-07-16 00:57:34.659069] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:20:59.932 [2024-07-16 00:57:34.659103] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:59.932 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2696037 00:21:00.189 00:57:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:00.189 00:57:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:00.189 00:57:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:00.189 00:57:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:00.189 00:57:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:00.189 00:57:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.189 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.189 00:57:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.755 00:57:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:02.755 00:21:02.755 real 0m5.534s 00:21:02.755 user 0m2.149s 00:21:02.755 sys 0m1.766s 00:21:02.755 00:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:02.755 00:57:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.755 ************************************ 00:21:02.755 END TEST nvmf_async_init 00:21:02.755 ************************************ 00:21:02.755 00:57:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:02.755 00:57:36 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:02.755 00:57:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:02.755 00:57:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:02.755 00:57:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:02.755 ************************************ 00:21:02.755 START TEST dma 00:21:02.755 ************************************ 00:21:02.755 00:57:37 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:02.755 * Looking for test storage... 
00:21:02.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:02.755 00:57:37 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:02.755 00:57:37 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.755 00:57:37 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.755 00:57:37 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.755 00:57:37 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.755 00:57:37 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.755 00:57:37 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.755 00:57:37 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:21:02.755 00:57:37 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:02.755 00:57:37 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:02.755 00:57:37 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:02.755 00:57:37 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:21:02.755 00:21:02.755 real 0m0.062s 00:21:02.755 user 0m0.032s 00:21:02.755 sys 0m0.035s 00:21:02.755 00:57:37 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:02.755 00:57:37 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:21:02.755 ************************************ 00:21:02.755 END TEST dma 00:21:02.755 ************************************ 00:21:02.755 00:57:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:02.755 00:57:37 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:02.755 00:57:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:02.755 00:57:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:02.755 00:57:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:02.755 ************************************ 00:21:02.755 START TEST nvmf_identify 00:21:02.755 ************************************ 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:02.755 * Looking for test storage... 
00:21:02.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.755 00:57:37 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:21:02.756 00:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:04.660 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:04.660 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:04.660 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:04.661 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:04.661 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:04.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:04.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:21:04.661 00:21:04.661 --- 10.0.0.2 ping statistics --- 00:21:04.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.661 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:04.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:04.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:21:04.661 00:21:04.661 --- 10.0.0.1 ping statistics --- 00:21:04.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.661 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2698214 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2698214 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2698214 ']' 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:04.661 00:57:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:04.661 [2024-07-16 00:57:39.338134] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:21:04.661 [2024-07-16 00:57:39.338249] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.661 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.661 [2024-07-16 00:57:39.410255] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:04.921 [2024-07-16 00:57:39.531488] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:04.921 [2024-07-16 00:57:39.531555] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.921 [2024-07-16 00:57:39.531569] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.921 [2024-07-16 00:57:39.531595] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.921 [2024-07-16 00:57:39.531605] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:04.921 [2024-07-16 00:57:39.531649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.921 [2024-07-16 00:57:39.531688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.921 [2024-07-16 00:57:39.531753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:04.921 [2024-07-16 00:57:39.531755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:05.859 [2024-07-16 00:57:40.338994] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:05.859 Malloc0 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:05.859 [2024-07-16 00:57:40.410904] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:05.859 [ 00:21:05.859 { 00:21:05.859 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:05.859 "subtype": "Discovery", 00:21:05.859 "listen_addresses": [ 00:21:05.859 { 00:21:05.859 "trtype": "TCP", 00:21:05.859 "adrfam": "IPv4", 00:21:05.859 "traddr": "10.0.0.2", 00:21:05.859 "trsvcid": "4420" 00:21:05.859 } 00:21:05.859 ], 00:21:05.859 "allow_any_host": true, 00:21:05.859 "hosts": [] 00:21:05.859 }, 00:21:05.859 { 00:21:05.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.859 "subtype": "NVMe", 00:21:05.859 "listen_addresses": [ 00:21:05.859 { 00:21:05.859 "trtype": "TCP", 00:21:05.859 "adrfam": "IPv4", 00:21:05.859 "traddr": "10.0.0.2", 00:21:05.859 "trsvcid": "4420" 00:21:05.859 } 00:21:05.859 ], 00:21:05.859 "allow_any_host": true, 00:21:05.859 "hosts": [], 00:21:05.859 "serial_number": "SPDK00000000000001", 00:21:05.859 "model_number": "SPDK bdev Controller", 00:21:05.859 "max_namespaces": 32, 00:21:05.859 "min_cntlid": 1, 00:21:05.859 "max_cntlid": 65519, 00:21:05.859 "namespaces": [ 00:21:05.859 { 00:21:05.859 "nsid": 1, 00:21:05.859 "bdev_name": "Malloc0", 00:21:05.859 "name": "Malloc0", 00:21:05.859 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:05.859 "eui64": "ABCDEF0123456789", 00:21:05.859 "uuid": "9ea34c13-7cc3-4983-8361-bed65693391f" 00:21:05.859 } 00:21:05.859 ] 00:21:05.859 } 00:21:05.859 ] 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.859 00:57:40 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:05.859 [2024-07-16 00:57:40.448728] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:21:05.859 [2024-07-16 00:57:40.448764] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2698366 ] 00:21:05.859 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.859 [2024-07-16 00:57:40.483630] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:05.859 [2024-07-16 00:57:40.483699] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:05.859 [2024-07-16 00:57:40.483710] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:05.859 [2024-07-16 00:57:40.483726] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:05.859 [2024-07-16 00:57:40.483737] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:05.859 [2024-07-16 00:57:40.484645] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:05.859 [2024-07-16 00:57:40.484724] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x5456e0 0 00:21:05.859 [2024-07-16 00:57:40.498890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:05.859 [2024-07-16 00:57:40.498918] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:05.859 [2024-07-16 00:57:40.498944] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:05.859 [2024-07-16 00:57:40.498951] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:05.859 [2024-07-16 00:57:40.499000] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.859 [2024-07-16 00:57:40.499013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.859 [2024-07-16 00:57:40.499021] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5456e0) 00:21:05.859 [2024-07-16 00:57:40.499040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:05.859 [2024-07-16 00:57:40.499069] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a5540, cid 0, qid 0 00:21:05.859 [2024-07-16 00:57:40.505905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.859 [2024-07-16 00:57:40.505923] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.859 [2024-07-16 00:57:40.505931] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.859 [2024-07-16 00:57:40.505939] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a5540) on tqpair=0x5456e0 00:21:05.859 [2024-07-16 00:57:40.505956] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:05.859 [2024-07-16 00:57:40.505968] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:05.859 [2024-07-16 00:57:40.505978] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:05.859 [2024-07-16 00:57:40.506001] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.859 [2024-07-16 00:57:40.506010] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.859 [2024-07-16 00:57:40.506016] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5456e0) 00:21:05.859 [2024-07-16 00:57:40.506028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.859 [2024-07-16 00:57:40.506052] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a5540, cid 0, qid 0 00:21:05.859 [2024-07-16 00:57:40.506257] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.859 [2024-07-16 00:57:40.506272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.859 [2024-07-16 00:57:40.506279] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.859 [2024-07-16 00:57:40.506286] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a5540) on tqpair=0x5456e0 00:21:05.859 [2024-07-16 00:57:40.506296] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:05.859 [2024-07-16 00:57:40.506309] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:05.859 [2024-07-16 00:57:40.506322] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.859 [2024-07-16 00:57:40.506329] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.859 [2024-07-16 00:57:40.506335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5456e0) 00:21:05.859 [2024-07-16 00:57:40.506361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.859 [2024-07-16 00:57:40.506383] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a5540, cid 0, qid 0 00:21:05.859 [2024-07-16 00:57:40.506627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.859 [2024-07-16 00:57:40.506643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.859 [2024-07-16 00:57:40.506649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.859 [2024-07-16 00:57:40.506656] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a5540) on tqpair=0x5456e0 00:21:05.860 [2024-07-16 00:57:40.506665] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:05.860 [2024-07-16 00:57:40.506679] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:05.860 [2024-07-16 00:57:40.506692] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.506699] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.506705] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5456e0) 00:21:05.860 [2024-07-16 00:57:40.506716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.860 [2024-07-16 00:57:40.506753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a5540, cid 0, qid 0 00:21:05.860 [2024-07-16 00:57:40.506973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.860 
[2024-07-16 00:57:40.506990] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.860 [2024-07-16 00:57:40.506996] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.507003] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a5540) on tqpair=0x5456e0 00:21:05.860 [2024-07-16 00:57:40.507012] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:05.860 [2024-07-16 00:57:40.507029] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.507039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.507045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5456e0) 00:21:05.860 [2024-07-16 00:57:40.507056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.860 [2024-07-16 00:57:40.507077] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a5540, cid 0, qid 0 00:21:05.860 [2024-07-16 00:57:40.507217] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.860 [2024-07-16 00:57:40.507229] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.860 [2024-07-16 00:57:40.507236] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.507243] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a5540) on tqpair=0x5456e0 00:21:05.860 [2024-07-16 00:57:40.507252] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:05.860 [2024-07-16 00:57:40.507261] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:05.860 [2024-07-16 00:57:40.507274] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:05.860 [2024-07-16 00:57:40.507389] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:05.860 [2024-07-16 00:57:40.507398] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:05.860 [2024-07-16 00:57:40.507414] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.507421] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.507427] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5456e0) 00:21:05.860 [2024-07-16 00:57:40.507438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.860 [2024-07-16 00:57:40.507458] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a5540, cid 0, qid 0 00:21:05.860 [2024-07-16 00:57:40.507666] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.860 [2024-07-16 00:57:40.507678] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.860 [2024-07-16 00:57:40.507685] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:21:05.860 [2024-07-16 00:57:40.507692] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a5540) on tqpair=0x5456e0 00:21:05.860 [2024-07-16 00:57:40.507700] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:05.860 [2024-07-16 00:57:40.507717] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.507726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.507732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5456e0) 00:21:05.860 [2024-07-16 00:57:40.507743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.860 [2024-07-16 00:57:40.507768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a5540, cid 0, qid 0 00:21:05.860 [2024-07-16 00:57:40.507960] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.860 [2024-07-16 00:57:40.507974] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.860 [2024-07-16 00:57:40.507981] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.507988] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a5540) on tqpair=0x5456e0 00:21:05.860 [2024-07-16 00:57:40.507996] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:05.860 [2024-07-16 00:57:40.508004] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:05.860 [2024-07-16 00:57:40.508017] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:05.860 [2024-07-16 00:57:40.508038] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:05.860 [2024-07-16 00:57:40.508057] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.508065] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5456e0) 00:21:05.860 [2024-07-16 00:57:40.508076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.860 [2024-07-16 00:57:40.508098] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a5540, cid 0, qid 0 00:21:05.860 [2024-07-16 00:57:40.508276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:05.860 [2024-07-16 00:57:40.508291] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:05.860 [2024-07-16 00:57:40.508298] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.508305] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5456e0): datao=0, datal=4096, cccid=0 00:21:05.860 [2024-07-16 00:57:40.508313] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5a5540) on tqpair(0x5456e0): expected_datao=0, payload_size=4096 00:21:05.860 [2024-07-16 00:57:40.508321] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:21:05.860 [2024-07-16 00:57:40.508353] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.508364] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.508502] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.860 [2024-07-16 00:57:40.508517] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.860 [2024-07-16 00:57:40.508524] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.508530] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a5540) on tqpair=0x5456e0 00:21:05.860 [2024-07-16 00:57:40.508543] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:05.860 [2024-07-16 00:57:40.508551] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:05.860 [2024-07-16 00:57:40.508559] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:05.860 [2024-07-16 00:57:40.508568] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:05.860 [2024-07-16 00:57:40.508578] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:05.860 [2024-07-16 00:57:40.508585] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:05.860 [2024-07-16 00:57:40.508600] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:05.860 [2024-07-16 00:57:40.508621] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.508630] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.508637] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5456e0) 00:21:05.860 [2024-07-16 00:57:40.508648] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:05.860 [2024-07-16 00:57:40.508685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a5540, cid 0, qid 0 00:21:05.860 [2024-07-16 00:57:40.508910] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.860 [2024-07-16 00:57:40.508926] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.860 [2024-07-16 00:57:40.508933] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.508940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a5540) on tqpair=0x5456e0 00:21:05.860 [2024-07-16 00:57:40.508953] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.508961] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.508967] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5456e0) 00:21:05.860 [2024-07-16 00:57:40.508977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.860 [2024-07-16 00:57:40.508988] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.508995] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.509001] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x5456e0) 00:21:05.860 [2024-07-16 00:57:40.509010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.860 [2024-07-16 00:57:40.509019] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.509026] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.509032] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x5456e0) 00:21:05.860 [2024-07-16 00:57:40.509041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.860 [2024-07-16 00:57:40.509051] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.509057] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.509064] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5456e0) 00:21:05.860 [2024-07-16 00:57:40.509073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.860 [2024-07-16 00:57:40.509082] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:05.860 [2024-07-16 00:57:40.509102] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:05.860 [2024-07-16 00:57:40.509115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.860 [2024-07-16 00:57:40.509122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5456e0) 00:21:05.860 [2024-07-16 00:57:40.509132] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.860 [2024-07-16 00:57:40.509170] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a5540, cid 0, qid 0 00:21:05.861 [2024-07-16 00:57:40.509182] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a56c0, cid 1, qid 0 00:21:05.861 [2024-07-16 00:57:40.509190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a5840, cid 2, qid 0 00:21:05.861 [2024-07-16 00:57:40.509197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a59c0, cid 3, qid 0 00:21:05.861 [2024-07-16 00:57:40.509208] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a5b40, cid 4, qid 0 00:21:05.861 [2024-07-16 00:57:40.509466] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.861 [2024-07-16 00:57:40.509482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.861 [2024-07-16 00:57:40.509489] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.509495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a5b40) on tqpair=0x5456e0 00:21:05.861 [2024-07-16 00:57:40.509505] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:05.861 [2024-07-16 00:57:40.509514] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:05.861 [2024-07-16 00:57:40.509547] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.509556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5456e0) 00:21:05.861 [2024-07-16 00:57:40.509567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.861 [2024-07-16 00:57:40.509587] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a5b40, cid 4, qid 0 00:21:05.861 [2024-07-16 00:57:40.509787] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:05.861 [2024-07-16 00:57:40.509803] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:05.861 [2024-07-16 00:57:40.509810] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.509816] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5456e0): datao=0, datal=4096, cccid=4 00:21:05.861 [2024-07-16 00:57:40.509824] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5a5b40) on tqpair(0x5456e0): expected_datao=0, payload_size=4096 00:21:05.861 [2024-07-16 00:57:40.509832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.509842] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.509850] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.513905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.861 [2024-07-16 00:57:40.513921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.861 [2024-07-16 00:57:40.513927] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.513934] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a5b40) on tqpair=0x5456e0 00:21:05.861 [2024-07-16 00:57:40.513967] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:05.861 [2024-07-16 00:57:40.514011] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.514022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5456e0) 00:21:05.861 [2024-07-16 00:57:40.514033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.861 [2024-07-16 00:57:40.514045] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.514052] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.514058] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5456e0) 00:21:05.861 [2024-07-16 00:57:40.514067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.861 [2024-07-16 00:57:40.514095] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x5a5b40, cid 4, qid 0 00:21:05.861 [2024-07-16 00:57:40.514107] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a5cc0, cid 5, qid 0 00:21:05.861 [2024-07-16 00:57:40.514354] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:05.861 [2024-07-16 00:57:40.514370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:05.861 [2024-07-16 00:57:40.514377] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.514384] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5456e0): datao=0, datal=1024, cccid=4 00:21:05.861 [2024-07-16 00:57:40.514406] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5a5b40) on tqpair(0x5456e0): expected_datao=0, payload_size=1024 00:21:05.861 [2024-07-16 00:57:40.514414] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.514423] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.514431] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.514439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.861 [2024-07-16 00:57:40.514448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.861 [2024-07-16 00:57:40.514454] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.514461] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a5cc0) on tqpair=0x5456e0 00:21:05.861 [2024-07-16 00:57:40.555088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.861 [2024-07-16 00:57:40.555109] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.861 [2024-07-16 00:57:40.555116] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.555124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a5b40) on tqpair=0x5456e0 00:21:05.861 [2024-07-16 00:57:40.555149] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.555159] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5456e0) 00:21:05.861 [2024-07-16 00:57:40.555171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.861 [2024-07-16 00:57:40.555201] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a5b40, cid 4, qid 0 00:21:05.861 [2024-07-16 00:57:40.555408] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:05.861 [2024-07-16 00:57:40.555424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:05.861 [2024-07-16 00:57:40.555430] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.555437] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5456e0): datao=0, datal=3072, cccid=4 00:21:05.861 [2024-07-16 00:57:40.555444] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5a5b40) on tqpair(0x5456e0): expected_datao=0, payload_size=3072 00:21:05.861 [2024-07-16 00:57:40.555452] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.555478] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.555488] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.555627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.861 [2024-07-16 00:57:40.555642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.861 [2024-07-16 00:57:40.555649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.555655] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a5b40) on tqpair=0x5456e0 00:21:05.861 [2024-07-16 00:57:40.555671] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.555679] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5456e0) 00:21:05.861 [2024-07-16 00:57:40.555690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.861 [2024-07-16 00:57:40.555718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a5b40, cid 4, qid 0 00:21:05.861 [2024-07-16 00:57:40.555891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:05.861 [2024-07-16 00:57:40.555905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:05.861 [2024-07-16 00:57:40.555918] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.555926] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5456e0): datao=0, datal=8, cccid=4 00:21:05.861 [2024-07-16 00:57:40.555933] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5a5b40) on tqpair(0x5456e0): expected_datao=0, payload_size=8 00:21:05.861 [2024-07-16 00:57:40.555940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.555950] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.555958] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.600902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.861 [2024-07-16 00:57:40.600920] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.861 [2024-07-16 00:57:40.600927] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.861 [2024-07-16 00:57:40.600950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a5b40) on tqpair=0x5456e0 00:21:05.861 ===================================================== 00:21:05.861 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:05.861 ===================================================== 00:21:05.861 Controller Capabilities/Features 00:21:05.861 ================================ 00:21:05.861 Vendor ID: 0000 00:21:05.861 Subsystem Vendor ID: 0000 00:21:05.861 Serial Number: .................... 00:21:05.861 Model Number: ........................................ 
00:21:05.861 Firmware Version: 24.09 00:21:05.861 Recommended Arb Burst: 0 00:21:05.861 IEEE OUI Identifier: 00 00 00 00:21:05.861 Multi-path I/O 00:21:05.861 May have multiple subsystem ports: No 00:21:05.861 May have multiple controllers: No 00:21:05.861 Associated with SR-IOV VF: No 00:21:05.861 Max Data Transfer Size: 131072 00:21:05.861 Max Number of Namespaces: 0 00:21:05.861 Max Number of I/O Queues: 1024 00:21:05.861 NVMe Specification Version (VS): 1.3 00:21:05.861 NVMe Specification Version (Identify): 1.3 00:21:05.861 Maximum Queue Entries: 128 00:21:05.861 Contiguous Queues Required: Yes 00:21:05.861 Arbitration Mechanisms Supported 00:21:05.861 Weighted Round Robin: Not Supported 00:21:05.861 Vendor Specific: Not Supported 00:21:05.861 Reset Timeout: 15000 ms 00:21:05.861 Doorbell Stride: 4 bytes 00:21:05.861 NVM Subsystem Reset: Not Supported 00:21:05.861 Command Sets Supported 00:21:05.861 NVM Command Set: Supported 00:21:05.861 Boot Partition: Not Supported 00:21:05.861 Memory Page Size Minimum: 4096 bytes 00:21:05.861 Memory Page Size Maximum: 4096 bytes 00:21:05.861 Persistent Memory Region: Not Supported 00:21:05.861 Optional Asynchronous Events Supported 00:21:05.861 Namespace Attribute Notices: Not Supported 00:21:05.861 Firmware Activation Notices: Not Supported 00:21:05.861 ANA Change Notices: Not Supported 00:21:05.861 PLE Aggregate Log Change Notices: Not Supported 00:21:05.861 LBA Status Info Alert Notices: Not Supported 00:21:05.861 EGE Aggregate Log Change Notices: Not Supported 00:21:05.861 Normal NVM Subsystem Shutdown event: Not Supported 00:21:05.862 Zone Descriptor Change Notices: Not Supported 00:21:05.862 Discovery Log Change Notices: Supported 00:21:05.862 Controller Attributes 00:21:05.862 128-bit Host Identifier: Not Supported 00:21:05.862 Non-Operational Permissive Mode: Not Supported 00:21:05.862 NVM Sets: Not Supported 00:21:05.862 Read Recovery Levels: Not Supported 00:21:05.862 Endurance Groups: Not Supported 00:21:05.862 Predictable Latency Mode: Not Supported 00:21:05.862 Traffic Based Keep ALive: Not Supported 00:21:05.862 Namespace Granularity: Not Supported 00:21:05.862 SQ Associations: Not Supported 00:21:05.862 UUID List: Not Supported 00:21:05.862 Multi-Domain Subsystem: Not Supported 00:21:05.862 Fixed Capacity Management: Not Supported 00:21:05.862 Variable Capacity Management: Not Supported 00:21:05.862 Delete Endurance Group: Not Supported 00:21:05.862 Delete NVM Set: Not Supported 00:21:05.862 Extended LBA Formats Supported: Not Supported 00:21:05.862 Flexible Data Placement Supported: Not Supported 00:21:05.862 00:21:05.862 Controller Memory Buffer Support 00:21:05.862 ================================ 00:21:05.862 Supported: No 00:21:05.862 00:21:05.862 Persistent Memory Region Support 00:21:05.862 ================================ 00:21:05.862 Supported: No 00:21:05.862 00:21:05.862 Admin Command Set Attributes 00:21:05.862 ============================ 00:21:05.862 Security Send/Receive: Not Supported 00:21:05.862 Format NVM: Not Supported 00:21:05.862 Firmware Activate/Download: Not Supported 00:21:05.862 Namespace Management: Not Supported 00:21:05.862 Device Self-Test: Not Supported 00:21:05.862 Directives: Not Supported 00:21:05.862 NVMe-MI: Not Supported 00:21:05.862 Virtualization Management: Not Supported 00:21:05.862 Doorbell Buffer Config: Not Supported 00:21:05.862 Get LBA Status Capability: Not Supported 00:21:05.862 Command & Feature Lockdown Capability: Not Supported 00:21:05.862 Abort Command Limit: 1 00:21:05.862 Async 
Event Request Limit: 4 00:21:05.862 Number of Firmware Slots: N/A 00:21:05.862 Firmware Slot 1 Read-Only: N/A 00:21:05.862 Firmware Activation Without Reset: N/A 00:21:05.862 Multiple Update Detection Support: N/A 00:21:05.862 Firmware Update Granularity: No Information Provided 00:21:05.862 Per-Namespace SMART Log: No 00:21:05.862 Asymmetric Namespace Access Log Page: Not Supported 00:21:05.862 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:05.862 Command Effects Log Page: Not Supported 00:21:05.862 Get Log Page Extended Data: Supported 00:21:05.862 Telemetry Log Pages: Not Supported 00:21:05.862 Persistent Event Log Pages: Not Supported 00:21:05.862 Supported Log Pages Log Page: May Support 00:21:05.862 Commands Supported & Effects Log Page: Not Supported 00:21:05.862 Feature Identifiers & Effects Log Page:May Support 00:21:05.862 NVMe-MI Commands & Effects Log Page: May Support 00:21:05.862 Data Area 4 for Telemetry Log: Not Supported 00:21:05.862 Error Log Page Entries Supported: 128 00:21:05.862 Keep Alive: Not Supported 00:21:05.862 00:21:05.862 NVM Command Set Attributes 00:21:05.862 ========================== 00:21:05.862 Submission Queue Entry Size 00:21:05.862 Max: 1 00:21:05.862 Min: 1 00:21:05.862 Completion Queue Entry Size 00:21:05.862 Max: 1 00:21:05.862 Min: 1 00:21:05.862 Number of Namespaces: 0 00:21:05.862 Compare Command: Not Supported 00:21:05.862 Write Uncorrectable Command: Not Supported 00:21:05.862 Dataset Management Command: Not Supported 00:21:05.862 Write Zeroes Command: Not Supported 00:21:05.862 Set Features Save Field: Not Supported 00:21:05.862 Reservations: Not Supported 00:21:05.862 Timestamp: Not Supported 00:21:05.862 Copy: Not Supported 00:21:05.862 Volatile Write Cache: Not Present 00:21:05.862 Atomic Write Unit (Normal): 1 00:21:05.862 Atomic Write Unit (PFail): 1 00:21:05.862 Atomic Compare & Write Unit: 1 00:21:05.862 Fused Compare & Write: Supported 00:21:05.862 Scatter-Gather List 00:21:05.862 SGL Command Set: Supported 00:21:05.862 SGL Keyed: Supported 00:21:05.862 SGL Bit Bucket Descriptor: Not Supported 00:21:05.862 SGL Metadata Pointer: Not Supported 00:21:05.862 Oversized SGL: Not Supported 00:21:05.862 SGL Metadata Address: Not Supported 00:21:05.862 SGL Offset: Supported 00:21:05.862 Transport SGL Data Block: Not Supported 00:21:05.862 Replay Protected Memory Block: Not Supported 00:21:05.862 00:21:05.862 Firmware Slot Information 00:21:05.862 ========================= 00:21:05.862 Active slot: 0 00:21:05.862 00:21:05.862 00:21:05.862 Error Log 00:21:05.862 ========= 00:21:05.862 00:21:05.862 Active Namespaces 00:21:05.862 ================= 00:21:05.862 Discovery Log Page 00:21:05.862 ================== 00:21:05.862 Generation Counter: 2 00:21:05.862 Number of Records: 2 00:21:05.862 Record Format: 0 00:21:05.862 00:21:05.862 Discovery Log Entry 0 00:21:05.862 ---------------------- 00:21:05.862 Transport Type: 3 (TCP) 00:21:05.862 Address Family: 1 (IPv4) 00:21:05.862 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:05.862 Entry Flags: 00:21:05.862 Duplicate Returned Information: 1 00:21:05.862 Explicit Persistent Connection Support for Discovery: 1 00:21:05.862 Transport Requirements: 00:21:05.862 Secure Channel: Not Required 00:21:05.862 Port ID: 0 (0x0000) 00:21:05.862 Controller ID: 65535 (0xffff) 00:21:05.862 Admin Max SQ Size: 128 00:21:05.862 Transport Service Identifier: 4420 00:21:05.862 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:05.862 Transport Address: 10.0.0.2 00:21:05.862 
Discovery Log Entry 1 00:21:05.862 ---------------------- 00:21:05.862 Transport Type: 3 (TCP) 00:21:05.862 Address Family: 1 (IPv4) 00:21:05.862 Subsystem Type: 2 (NVM Subsystem) 00:21:05.862 Entry Flags: 00:21:05.862 Duplicate Returned Information: 0 00:21:05.862 Explicit Persistent Connection Support for Discovery: 0 00:21:05.862 Transport Requirements: 00:21:05.862 Secure Channel: Not Required 00:21:05.862 Port ID: 0 (0x0000) 00:21:05.862 Controller ID: 65535 (0xffff) 00:21:05.862 Admin Max SQ Size: 128 00:21:05.862 Transport Service Identifier: 4420 00:21:05.862 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:05.862 Transport Address: 10.0.0.2 [2024-07-16 00:57:40.601063] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:05.862 [2024-07-16 00:57:40.601086] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a5540) on tqpair=0x5456e0 00:21:05.862 [2024-07-16 00:57:40.601098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.862 [2024-07-16 00:57:40.601116] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a56c0) on tqpair=0x5456e0 00:21:05.862 [2024-07-16 00:57:40.601123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.862 [2024-07-16 00:57:40.601132] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a5840) on tqpair=0x5456e0 00:21:05.862 [2024-07-16 00:57:40.601139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.862 [2024-07-16 00:57:40.601147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a59c0) on tqpair=0x5456e0 00:21:05.862 [2024-07-16 00:57:40.601155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.862 [2024-07-16 00:57:40.601169] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.862 [2024-07-16 00:57:40.601187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.862 [2024-07-16 00:57:40.601193] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5456e0) 00:21:05.862 [2024-07-16 00:57:40.601204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.862 [2024-07-16 00:57:40.601230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a59c0, cid 3, qid 0 00:21:05.862 [2024-07-16 00:57:40.601429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.862 [2024-07-16 00:57:40.601442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.862 [2024-07-16 00:57:40.601449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.862 [2024-07-16 00:57:40.601456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a59c0) on tqpair=0x5456e0 00:21:05.862 [2024-07-16 00:57:40.601467] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.862 [2024-07-16 00:57:40.601475] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.862 [2024-07-16 00:57:40.601481] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5456e0) 00:21:05.862 [2024-07-16 00:57:40.601492] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.862 [2024-07-16 00:57:40.601518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a59c0, cid 3, qid 0 00:21:05.862 [2024-07-16 00:57:40.601738] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.862 [2024-07-16 00:57:40.601751] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.862 [2024-07-16 00:57:40.601762] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.862 [2024-07-16 00:57:40.601769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a59c0) on tqpair=0x5456e0 00:21:05.862 [2024-07-16 00:57:40.601778] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:05.862 [2024-07-16 00:57:40.601787] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:05.862 [2024-07-16 00:57:40.601803] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.601812] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.601834] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5456e0) 00:21:05.863 [2024-07-16 00:57:40.601844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.863 [2024-07-16 00:57:40.601873] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a59c0, cid 3, qid 0 00:21:05.863 [2024-07-16 00:57:40.602069] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.863 [2024-07-16 00:57:40.602085] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.863 [2024-07-16 00:57:40.602092] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.602099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a59c0) on tqpair=0x5456e0 00:21:05.863 [2024-07-16 00:57:40.602116] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.602125] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.602132] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5456e0) 00:21:05.863 [2024-07-16 00:57:40.602142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.863 [2024-07-16 00:57:40.602164] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a59c0, cid 3, qid 0 00:21:05.863 [2024-07-16 00:57:40.602353] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.863 [2024-07-16 00:57:40.602365] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.863 [2024-07-16 00:57:40.602372] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.602379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a59c0) on tqpair=0x5456e0 00:21:05.863 [2024-07-16 00:57:40.602394] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.602403] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.602409] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5456e0) 00:21:05.863 [2024-07-16 00:57:40.602420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.863 [2024-07-16 00:57:40.602440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a59c0, cid 3, qid 0 00:21:05.863 [2024-07-16 00:57:40.602630] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.863 [2024-07-16 00:57:40.602642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.863 [2024-07-16 00:57:40.602649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.602655] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a59c0) on tqpair=0x5456e0 00:21:05.863 [2024-07-16 00:57:40.602671] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.602680] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.602686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5456e0) 00:21:05.863 [2024-07-16 00:57:40.602697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.863 [2024-07-16 00:57:40.602717] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a59c0, cid 3, qid 0 00:21:05.863 [2024-07-16 00:57:40.602863] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.863 [2024-07-16 00:57:40.602885] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.863 [2024-07-16 00:57:40.602893] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.602900] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a59c0) on tqpair=0x5456e0 00:21:05.863 [2024-07-16 00:57:40.602917] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.602926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.602932] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5456e0) 00:21:05.863 [2024-07-16 00:57:40.602943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.863 [2024-07-16 00:57:40.602964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a59c0, cid 3, qid 0 00:21:05.863 [2024-07-16 00:57:40.603108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.863 [2024-07-16 00:57:40.603123] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.863 [2024-07-16 00:57:40.603130] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.603137] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a59c0) on tqpair=0x5456e0 00:21:05.863 [2024-07-16 00:57:40.603153] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.603162] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.603168] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5456e0) 00:21:05.863 [2024-07-16 00:57:40.603179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.863 [2024-07-16 00:57:40.603199] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a59c0, cid 3, qid 0 00:21:05.863 [2024-07-16 00:57:40.603353] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.863 [2024-07-16 00:57:40.603365] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.863 [2024-07-16 00:57:40.603371] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.603378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a59c0) on tqpair=0x5456e0 00:21:05.863 [2024-07-16 00:57:40.603394] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.603402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.603409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5456e0) 00:21:05.863 [2024-07-16 00:57:40.603419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.863 [2024-07-16 00:57:40.603440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a59c0, cid 3, qid 0 00:21:05.863 [2024-07-16 00:57:40.603583] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.863 [2024-07-16 00:57:40.603597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.863 [2024-07-16 00:57:40.603604] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.603611] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a59c0) on tqpair=0x5456e0 00:21:05.863 [2024-07-16 00:57:40.603627] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.603636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.603643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5456e0) 00:21:05.863 [2024-07-16 00:57:40.603653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.863 [2024-07-16 00:57:40.603674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a59c0, cid 3, qid 0 00:21:05.863 [2024-07-16 00:57:40.603873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.863 [2024-07-16 00:57:40.603893] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.863 [2024-07-16 00:57:40.603903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.603911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a59c0) on tqpair=0x5456e0 00:21:05.863 [2024-07-16 00:57:40.603927] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.603936] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.603943] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5456e0) 00:21:05.863 [2024-07-16 00:57:40.603953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.863 [2024-07-16 00:57:40.603974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a59c0, cid 3, qid 0 00:21:05.863 [2024-07-16 00:57:40.604120] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.863 [2024-07-16 00:57:40.604135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.863 [2024-07-16 00:57:40.604142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.604148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a59c0) on tqpair=0x5456e0 00:21:05.863 [2024-07-16 00:57:40.604164] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.604174] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.604180] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5456e0) 00:21:05.863 [2024-07-16 00:57:40.604190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.863 [2024-07-16 00:57:40.604211] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a59c0, cid 3, qid 0 00:21:05.863 [2024-07-16 00:57:40.604366] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.863 [2024-07-16 00:57:40.604381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.863 [2024-07-16 00:57:40.604387] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.604394] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a59c0) on tqpair=0x5456e0 00:21:05.863 [2024-07-16 00:57:40.604410] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.604419] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.863 [2024-07-16 00:57:40.604425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5456e0) 00:21:05.863 [2024-07-16 00:57:40.604436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.864 [2024-07-16 00:57:40.604457] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a59c0, cid 3, qid 0 00:21:05.864 [2024-07-16 00:57:40.604646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.864 [2024-07-16 00:57:40.604658] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.864 [2024-07-16 00:57:40.604664] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.864 [2024-07-16 00:57:40.604671] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a59c0) on tqpair=0x5456e0 00:21:05.864 [2024-07-16 00:57:40.604686] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.864 [2024-07-16 00:57:40.604695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.864 [2024-07-16 00:57:40.604701] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5456e0) 00:21:05.864 [2024-07-16 00:57:40.604712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.864 [2024-07-16 00:57:40.604747] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a59c0, cid 3, qid 0 00:21:05.864 [2024-07-16 00:57:40.608893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.864 [2024-07-16 00:57:40.608910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.864 [2024-07-16 00:57:40.608917] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.864 [2024-07-16 00:57:40.608928] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a59c0) on tqpair=0x5456e0 00:21:05.864 [2024-07-16 00:57:40.608946] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.864 [2024-07-16 00:57:40.608955] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.864 [2024-07-16 00:57:40.608962] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5456e0) 00:21:05.864 [2024-07-16 00:57:40.608972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.864 [2024-07-16 00:57:40.608994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a59c0, cid 3, qid 0 00:21:05.864 [2024-07-16 00:57:40.609151] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.864 [2024-07-16 00:57:40.609166] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.864 [2024-07-16 00:57:40.609173] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.864 [2024-07-16 00:57:40.609180] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a59c0) on tqpair=0x5456e0 00:21:05.864 [2024-07-16 00:57:40.609192] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:21:06.123 00:21:06.123 00:57:40 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:06.123 [2024-07-16 00:57:40.646139] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
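[Editor's note, not part of the captured log] The identify step above drives the full NVMe-oF TCP connect/init sequence traced in this log (FABRIC CONNECT, CC.EN = 1, CSTS.RDY polling, IDENTIFY, keep-alive setup) against nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420. A minimal sketch of the same flow through SPDK's public host API follows; the transport-ID string is taken verbatim from the -r argument above, while the program name and error handling are illustrative assumptions, not part of the test.

#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Bring up the SPDK environment (hugepages / DPDK EAL), as the
	 * "DPDK EAL parameters" line does for spdk_nvme_identify. */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* illustrative name, not from the log */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID the test passes to spdk_nvme_identify via -r. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Synchronous fabrics connect; internally this performs the FABRIC
	 * CONNECT / property get+set / CSTS.RDY polling seen in the debug trace. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* Identify-controller data, the source of the CNTLID/MDTS values in the log. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("CNTLID: 0x%04x\n", cdata->cntlid);
	printf("Max transfer size: %u bytes\n", spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

	spdk_nvme_detach(ctrlr);
	return 0;
}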
00:21:06.124 [2024-07-16 00:57:40.646207] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2698377 ] 00:21:06.124 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.124 [2024-07-16 00:57:40.679740] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:06.124 [2024-07-16 00:57:40.679785] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:06.124 [2024-07-16 00:57:40.679795] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:06.124 [2024-07-16 00:57:40.679808] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:06.124 [2024-07-16 00:57:40.679816] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:06.124 [2024-07-16 00:57:40.680309] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:06.124 [2024-07-16 00:57:40.680363] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x4eb6e0 0 00:21:06.124 [2024-07-16 00:57:40.686891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:06.124 [2024-07-16 00:57:40.686913] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:06.124 [2024-07-16 00:57:40.686922] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:06.124 [2024-07-16 00:57:40.686928] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:06.124 [2024-07-16 00:57:40.686974] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.686986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.686993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4eb6e0) 00:21:06.124 [2024-07-16 00:57:40.687007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:06.124 [2024-07-16 00:57:40.687034] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b540, cid 0, qid 0 00:21:06.124 [2024-07-16 00:57:40.692905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.124 [2024-07-16 00:57:40.692923] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.124 [2024-07-16 00:57:40.692930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.692937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b540) on tqpair=0x4eb6e0 00:21:06.124 [2024-07-16 00:57:40.692950] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:06.124 [2024-07-16 00:57:40.692976] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:06.124 [2024-07-16 00:57:40.692985] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:06.124 [2024-07-16 00:57:40.693004] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.693013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.124 
[2024-07-16 00:57:40.693019] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4eb6e0) 00:21:06.124 [2024-07-16 00:57:40.693031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.124 [2024-07-16 00:57:40.693055] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b540, cid 0, qid 0 00:21:06.124 [2024-07-16 00:57:40.693241] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.124 [2024-07-16 00:57:40.693256] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.124 [2024-07-16 00:57:40.693263] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.693270] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b540) on tqpair=0x4eb6e0 00:21:06.124 [2024-07-16 00:57:40.693278] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:06.124 [2024-07-16 00:57:40.693291] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:06.124 [2024-07-16 00:57:40.693303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.693311] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.693317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4eb6e0) 00:21:06.124 [2024-07-16 00:57:40.693328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.124 [2024-07-16 00:57:40.693349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b540, cid 0, qid 0 00:21:06.124 [2024-07-16 00:57:40.693605] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.124 [2024-07-16 00:57:40.693621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.124 [2024-07-16 00:57:40.693628] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.693635] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b540) on tqpair=0x4eb6e0 00:21:06.124 [2024-07-16 00:57:40.693643] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:06.124 [2024-07-16 00:57:40.693657] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:06.124 [2024-07-16 00:57:40.693669] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.693677] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.693683] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4eb6e0) 00:21:06.124 [2024-07-16 00:57:40.693694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.124 [2024-07-16 00:57:40.693715] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b540, cid 0, qid 0 00:21:06.124 [2024-07-16 00:57:40.693861] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.124 [2024-07-16 00:57:40.693884] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.124 
[2024-07-16 00:57:40.693892] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.693899] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b540) on tqpair=0x4eb6e0 00:21:06.124 [2024-07-16 00:57:40.693908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:06.124 [2024-07-16 00:57:40.693925] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.693934] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.693941] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4eb6e0) 00:21:06.124 [2024-07-16 00:57:40.693951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.124 [2024-07-16 00:57:40.693972] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b540, cid 0, qid 0 00:21:06.124 [2024-07-16 00:57:40.694123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.124 [2024-07-16 00:57:40.694135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.124 [2024-07-16 00:57:40.694142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.694148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b540) on tqpair=0x4eb6e0 00:21:06.124 [2024-07-16 00:57:40.694156] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:06.124 [2024-07-16 00:57:40.694164] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:06.124 [2024-07-16 00:57:40.694177] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:06.124 [2024-07-16 00:57:40.694287] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:06.124 [2024-07-16 00:57:40.694295] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:06.124 [2024-07-16 00:57:40.694307] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.694314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.694320] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4eb6e0) 00:21:06.124 [2024-07-16 00:57:40.694330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.124 [2024-07-16 00:57:40.694351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b540, cid 0, qid 0 00:21:06.124 [2024-07-16 00:57:40.694536] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.124 [2024-07-16 00:57:40.694548] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.124 [2024-07-16 00:57:40.694555] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.694561] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b540) on tqpair=0x4eb6e0 00:21:06.124 [2024-07-16 
00:57:40.694570] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:06.124 [2024-07-16 00:57:40.694586] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.694595] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.694602] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4eb6e0) 00:21:06.124 [2024-07-16 00:57:40.694613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.124 [2024-07-16 00:57:40.694633] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b540, cid 0, qid 0 00:21:06.124 [2024-07-16 00:57:40.694773] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.124 [2024-07-16 00:57:40.694786] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.124 [2024-07-16 00:57:40.694793] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.694800] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b540) on tqpair=0x4eb6e0 00:21:06.124 [2024-07-16 00:57:40.694807] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:06.124 [2024-07-16 00:57:40.694816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:06.124 [2024-07-16 00:57:40.694829] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:06.124 [2024-07-16 00:57:40.694842] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:06.124 [2024-07-16 00:57:40.694856] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.694864] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4eb6e0) 00:21:06.124 [2024-07-16 00:57:40.694883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.124 [2024-07-16 00:57:40.694907] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b540, cid 0, qid 0 00:21:06.124 [2024-07-16 00:57:40.695125] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.124 [2024-07-16 00:57:40.695140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.124 [2024-07-16 00:57:40.695147] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.695154] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4eb6e0): datao=0, datal=4096, cccid=0 00:21:06.124 [2024-07-16 00:57:40.695161] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x54b540) on tqpair(0x4eb6e0): expected_datao=0, payload_size=4096 00:21:06.124 [2024-07-16 00:57:40.695169] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.695191] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.124 [2024-07-16 00:57:40.695201] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:06.124 
[2024-07-16 00:57:40.736046] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.124 [2024-07-16 00:57:40.736065] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.124 [2024-07-16 00:57:40.736073] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.736080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b540) on tqpair=0x4eb6e0 00:21:06.125 [2024-07-16 00:57:40.736091] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:06.125 [2024-07-16 00:57:40.736101] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:06.125 [2024-07-16 00:57:40.736109] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:06.125 [2024-07-16 00:57:40.736116] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:06.125 [2024-07-16 00:57:40.736123] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:06.125 [2024-07-16 00:57:40.736132] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:06.125 [2024-07-16 00:57:40.736146] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:06.125 [2024-07-16 00:57:40.736163] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.736183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.736191] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4eb6e0) 00:21:06.125 [2024-07-16 00:57:40.736207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:06.125 [2024-07-16 00:57:40.736231] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b540, cid 0, qid 0 00:21:06.125 [2024-07-16 00:57:40.736374] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.125 [2024-07-16 00:57:40.736390] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.125 [2024-07-16 00:57:40.736398] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.736405] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b540) on tqpair=0x4eb6e0 00:21:06.125 [2024-07-16 00:57:40.736416] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.736424] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.736431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4eb6e0) 00:21:06.125 [2024-07-16 00:57:40.736442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.125 [2024-07-16 00:57:40.736453] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.736460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.736467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x4eb6e0) 
00:21:06.125 [2024-07-16 00:57:40.736478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.125 [2024-07-16 00:57:40.736488] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.736494] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.736501] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x4eb6e0) 00:21:06.125 [2024-07-16 00:57:40.736526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.125 [2024-07-16 00:57:40.736536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.736543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.736549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4eb6e0) 00:21:06.125 [2024-07-16 00:57:40.736557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.125 [2024-07-16 00:57:40.736566] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:06.125 [2024-07-16 00:57:40.736585] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:06.125 [2024-07-16 00:57:40.736598] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.736605] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4eb6e0) 00:21:06.125 [2024-07-16 00:57:40.736616] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.125 [2024-07-16 00:57:40.736638] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b540, cid 0, qid 0 00:21:06.125 [2024-07-16 00:57:40.736664] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b6c0, cid 1, qid 0 00:21:06.125 [2024-07-16 00:57:40.736672] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b840, cid 2, qid 0 00:21:06.125 [2024-07-16 00:57:40.736680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b9c0, cid 3, qid 0 00:21:06.125 [2024-07-16 00:57:40.736688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54bb40, cid 4, qid 0 00:21:06.125 [2024-07-16 00:57:40.736887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.125 [2024-07-16 00:57:40.736904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.125 [2024-07-16 00:57:40.736912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.736919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54bb40) on tqpair=0x4eb6e0 00:21:06.125 [2024-07-16 00:57:40.736927] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:06.125 [2024-07-16 00:57:40.736936] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:06.125 [2024-07-16 00:57:40.736954] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:06.125 [2024-07-16 00:57:40.736967] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:06.125 [2024-07-16 00:57:40.736978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.736985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.736992] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4eb6e0) 00:21:06.125 [2024-07-16 00:57:40.737003] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:06.125 [2024-07-16 00:57:40.737024] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54bb40, cid 4, qid 0 00:21:06.125 [2024-07-16 00:57:40.737212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.125 [2024-07-16 00:57:40.737227] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.125 [2024-07-16 00:57:40.737234] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.737241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54bb40) on tqpair=0x4eb6e0 00:21:06.125 [2024-07-16 00:57:40.737311] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:06.125 [2024-07-16 00:57:40.737332] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:06.125 [2024-07-16 00:57:40.737374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.737382] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4eb6e0) 00:21:06.125 [2024-07-16 00:57:40.737393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.125 [2024-07-16 00:57:40.737426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54bb40, cid 4, qid 0 00:21:06.125 [2024-07-16 00:57:40.737627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.125 [2024-07-16 00:57:40.737643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.125 [2024-07-16 00:57:40.737650] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.737656] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4eb6e0): datao=0, datal=4096, cccid=4 00:21:06.125 [2024-07-16 00:57:40.737664] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x54bb40) on tqpair(0x4eb6e0): expected_datao=0, payload_size=4096 00:21:06.125 [2024-07-16 00:57:40.737671] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.737692] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.737701] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.781886] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.125 [2024-07-16 00:57:40.781905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:21:06.125 [2024-07-16 00:57:40.781912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.781918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54bb40) on tqpair=0x4eb6e0 00:21:06.125 [2024-07-16 00:57:40.781939] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:06.125 [2024-07-16 00:57:40.781965] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:06.125 [2024-07-16 00:57:40.781997] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:06.125 [2024-07-16 00:57:40.782013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.782021] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4eb6e0) 00:21:06.125 [2024-07-16 00:57:40.782033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.125 [2024-07-16 00:57:40.782056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54bb40, cid 4, qid 0 00:21:06.125 [2024-07-16 00:57:40.782257] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.125 [2024-07-16 00:57:40.782269] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.125 [2024-07-16 00:57:40.782276] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.782282] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4eb6e0): datao=0, datal=4096, cccid=4 00:21:06.125 [2024-07-16 00:57:40.782290] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x54bb40) on tqpair(0x4eb6e0): expected_datao=0, payload_size=4096 00:21:06.125 [2024-07-16 00:57:40.782297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.782308] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.782315] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.782365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.125 [2024-07-16 00:57:40.782377] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.125 [2024-07-16 00:57:40.782383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.782390] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54bb40) on tqpair=0x4eb6e0 00:21:06.125 [2024-07-16 00:57:40.782411] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:06.125 [2024-07-16 00:57:40.782430] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:06.125 [2024-07-16 00:57:40.782445] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.125 [2024-07-16 00:57:40.782453] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4eb6e0) 00:21:06.125 [2024-07-16 00:57:40.782463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.125 [2024-07-16 00:57:40.782485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54bb40, cid 4, qid 0 00:21:06.125 [2024-07-16 00:57:40.782646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.125 [2024-07-16 00:57:40.782658] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.125 [2024-07-16 00:57:40.782665] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.782671] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4eb6e0): datao=0, datal=4096, cccid=4 00:21:06.126 [2024-07-16 00:57:40.782679] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x54bb40) on tqpair(0x4eb6e0): expected_datao=0, payload_size=4096 00:21:06.126 [2024-07-16 00:57:40.782686] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.782716] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.782725] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.823038] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.126 [2024-07-16 00:57:40.823060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.126 [2024-07-16 00:57:40.823069] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.823076] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54bb40) on tqpair=0x4eb6e0 00:21:06.126 [2024-07-16 00:57:40.823090] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:06.126 [2024-07-16 00:57:40.823105] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:06.126 [2024-07-16 00:57:40.823120] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:06.126 [2024-07-16 00:57:40.823132] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:06.126 [2024-07-16 00:57:40.823141] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:06.126 [2024-07-16 00:57:40.823150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:06.126 [2024-07-16 00:57:40.823158] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:06.126 [2024-07-16 00:57:40.823166] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:06.126 [2024-07-16 00:57:40.823174] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:06.126 [2024-07-16 00:57:40.823193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.823202] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4eb6e0) 00:21:06.126 [2024-07-16 00:57:40.823213] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.126 [2024-07-16 00:57:40.823224] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.823232] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.823238] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4eb6e0) 00:21:06.126 [2024-07-16 00:57:40.823247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.126 [2024-07-16 00:57:40.823274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54bb40, cid 4, qid 0 00:21:06.126 [2024-07-16 00:57:40.823286] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54bcc0, cid 5, qid 0 00:21:06.126 [2024-07-16 00:57:40.823444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.126 [2024-07-16 00:57:40.823456] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.126 [2024-07-16 00:57:40.823462] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.823469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54bb40) on tqpair=0x4eb6e0 00:21:06.126 [2024-07-16 00:57:40.823480] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.126 [2024-07-16 00:57:40.823489] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.126 [2024-07-16 00:57:40.823495] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.823502] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54bcc0) on tqpair=0x4eb6e0 00:21:06.126 [2024-07-16 00:57:40.823517] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.823526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4eb6e0) 00:21:06.126 [2024-07-16 00:57:40.823537] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.126 [2024-07-16 00:57:40.823576] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54bcc0, cid 5, qid 0 00:21:06.126 [2024-07-16 00:57:40.823795] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.126 [2024-07-16 00:57:40.823808] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.126 [2024-07-16 00:57:40.823815] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.823821] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54bcc0) on tqpair=0x4eb6e0 00:21:06.126 [2024-07-16 00:57:40.823837] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.823846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4eb6e0) 00:21:06.126 [2024-07-16 00:57:40.823857] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.126 [2024-07-16 00:57:40.823884] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54bcc0, cid 5, qid 0 00:21:06.126 [2024-07-16 00:57:40.824026] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.126 [2024-07-16 00:57:40.824042] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:21:06.126 [2024-07-16 00:57:40.824048] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.824055] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54bcc0) on tqpair=0x4eb6e0 00:21:06.126 [2024-07-16 00:57:40.824071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.824080] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4eb6e0) 00:21:06.126 [2024-07-16 00:57:40.824091] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.126 [2024-07-16 00:57:40.824112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54bcc0, cid 5, qid 0 00:21:06.126 [2024-07-16 00:57:40.824260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.126 [2024-07-16 00:57:40.824272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.126 [2024-07-16 00:57:40.824279] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.824285] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54bcc0) on tqpair=0x4eb6e0 00:21:06.126 [2024-07-16 00:57:40.824309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.824320] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4eb6e0) 00:21:06.126 [2024-07-16 00:57:40.824331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.126 [2024-07-16 00:57:40.824344] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.824351] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4eb6e0) 00:21:06.126 [2024-07-16 00:57:40.824361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.126 [2024-07-16 00:57:40.824373] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.824380] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x4eb6e0) 00:21:06.126 [2024-07-16 00:57:40.824404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.126 [2024-07-16 00:57:40.824416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.824424] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x4eb6e0) 00:21:06.126 [2024-07-16 00:57:40.824433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.126 [2024-07-16 00:57:40.824458] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54bcc0, cid 5, qid 0 00:21:06.126 [2024-07-16 00:57:40.824485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54bb40, cid 4, qid 0 00:21:06.126 [2024-07-16 00:57:40.824493] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54be40, cid 6, qid 0 00:21:06.126 [2024-07-16 
00:57:40.824500] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54bfc0, cid 7, qid 0 00:21:06.126 [2024-07-16 00:57:40.824833] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.126 [2024-07-16 00:57:40.824849] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.126 [2024-07-16 00:57:40.824856] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.824862] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4eb6e0): datao=0, datal=8192, cccid=5 00:21:06.126 [2024-07-16 00:57:40.824870] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x54bcc0) on tqpair(0x4eb6e0): expected_datao=0, payload_size=8192 00:21:06.126 [2024-07-16 00:57:40.828886] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.828903] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.828911] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.828919] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.126 [2024-07-16 00:57:40.828928] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.126 [2024-07-16 00:57:40.828934] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.828940] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4eb6e0): datao=0, datal=512, cccid=4 00:21:06.126 [2024-07-16 00:57:40.828947] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x54bb40) on tqpair(0x4eb6e0): expected_datao=0, payload_size=512 00:21:06.126 [2024-07-16 00:57:40.828954] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.828963] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.828970] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.828978] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.126 [2024-07-16 00:57:40.828986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.126 [2024-07-16 00:57:40.828993] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.828999] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4eb6e0): datao=0, datal=512, cccid=6 00:21:06.126 [2024-07-16 00:57:40.829006] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x54be40) on tqpair(0x4eb6e0): expected_datao=0, payload_size=512 00:21:06.126 [2024-07-16 00:57:40.829013] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.829021] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.829028] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.829036] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:06.126 [2024-07-16 00:57:40.829045] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:06.126 [2024-07-16 00:57:40.829051] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.829057] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4eb6e0): datao=0, datal=4096, cccid=7 00:21:06.126 [2024-07-16 00:57:40.829064] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x54bfc0) on tqpair(0x4eb6e0): expected_datao=0, payload_size=4096 00:21:06.126 [2024-07-16 00:57:40.829071] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.829080] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.829087] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:06.126 [2024-07-16 00:57:40.829098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.126 [2024-07-16 00:57:40.829107] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.127 [2024-07-16 00:57:40.829117] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.127 [2024-07-16 00:57:40.829124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54bcc0) on tqpair=0x4eb6e0 00:21:06.127 [2024-07-16 00:57:40.829142] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.127 [2024-07-16 00:57:40.829153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.127 [2024-07-16 00:57:40.829159] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.127 [2024-07-16 00:57:40.829166] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54bb40) on tqpair=0x4eb6e0 00:21:06.127 [2024-07-16 00:57:40.829180] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.127 [2024-07-16 00:57:40.829205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.127 [2024-07-16 00:57:40.829211] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.127 [2024-07-16 00:57:40.829217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54be40) on tqpair=0x4eb6e0 00:21:06.127 [2024-07-16 00:57:40.829227] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.127 [2024-07-16 00:57:40.829236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.127 [2024-07-16 00:57:40.829241] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.127 [2024-07-16 00:57:40.829248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54bfc0) on tqpair=0x4eb6e0 00:21:06.127 ===================================================== 00:21:06.127 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:06.127 ===================================================== 00:21:06.127 Controller Capabilities/Features 00:21:06.127 ================================ 00:21:06.127 Vendor ID: 8086 00:21:06.127 Subsystem Vendor ID: 8086 00:21:06.127 Serial Number: SPDK00000000000001 00:21:06.127 Model Number: SPDK bdev Controller 00:21:06.127 Firmware Version: 24.09 00:21:06.127 Recommended Arb Burst: 6 00:21:06.127 IEEE OUI Identifier: e4 d2 5c 00:21:06.127 Multi-path I/O 00:21:06.127 May have multiple subsystem ports: Yes 00:21:06.127 May have multiple controllers: Yes 00:21:06.127 Associated with SR-IOV VF: No 00:21:06.127 Max Data Transfer Size: 131072 00:21:06.127 Max Number of Namespaces: 32 00:21:06.127 Max Number of I/O Queues: 127 00:21:06.127 NVMe Specification Version (VS): 1.3 00:21:06.127 NVMe Specification Version (Identify): 1.3 00:21:06.127 Maximum Queue Entries: 128 00:21:06.127 Contiguous Queues Required: Yes 00:21:06.127 Arbitration Mechanisms Supported 00:21:06.127 Weighted Round Robin: Not Supported 00:21:06.127 Vendor Specific: Not Supported 00:21:06.127 Reset Timeout: 15000 ms 00:21:06.127 
Doorbell Stride: 4 bytes 00:21:06.127 NVM Subsystem Reset: Not Supported 00:21:06.127 Command Sets Supported 00:21:06.127 NVM Command Set: Supported 00:21:06.127 Boot Partition: Not Supported 00:21:06.127 Memory Page Size Minimum: 4096 bytes 00:21:06.127 Memory Page Size Maximum: 4096 bytes 00:21:06.127 Persistent Memory Region: Not Supported 00:21:06.127 Optional Asynchronous Events Supported 00:21:06.127 Namespace Attribute Notices: Supported 00:21:06.127 Firmware Activation Notices: Not Supported 00:21:06.127 ANA Change Notices: Not Supported 00:21:06.127 PLE Aggregate Log Change Notices: Not Supported 00:21:06.127 LBA Status Info Alert Notices: Not Supported 00:21:06.127 EGE Aggregate Log Change Notices: Not Supported 00:21:06.127 Normal NVM Subsystem Shutdown event: Not Supported 00:21:06.127 Zone Descriptor Change Notices: Not Supported 00:21:06.127 Discovery Log Change Notices: Not Supported 00:21:06.127 Controller Attributes 00:21:06.127 128-bit Host Identifier: Supported 00:21:06.127 Non-Operational Permissive Mode: Not Supported 00:21:06.127 NVM Sets: Not Supported 00:21:06.127 Read Recovery Levels: Not Supported 00:21:06.127 Endurance Groups: Not Supported 00:21:06.127 Predictable Latency Mode: Not Supported 00:21:06.127 Traffic Based Keep ALive: Not Supported 00:21:06.127 Namespace Granularity: Not Supported 00:21:06.127 SQ Associations: Not Supported 00:21:06.127 UUID List: Not Supported 00:21:06.127 Multi-Domain Subsystem: Not Supported 00:21:06.127 Fixed Capacity Management: Not Supported 00:21:06.127 Variable Capacity Management: Not Supported 00:21:06.127 Delete Endurance Group: Not Supported 00:21:06.127 Delete NVM Set: Not Supported 00:21:06.127 Extended LBA Formats Supported: Not Supported 00:21:06.127 Flexible Data Placement Supported: Not Supported 00:21:06.127 00:21:06.127 Controller Memory Buffer Support 00:21:06.127 ================================ 00:21:06.127 Supported: No 00:21:06.127 00:21:06.127 Persistent Memory Region Support 00:21:06.127 ================================ 00:21:06.127 Supported: No 00:21:06.127 00:21:06.127 Admin Command Set Attributes 00:21:06.127 ============================ 00:21:06.127 Security Send/Receive: Not Supported 00:21:06.127 Format NVM: Not Supported 00:21:06.127 Firmware Activate/Download: Not Supported 00:21:06.127 Namespace Management: Not Supported 00:21:06.127 Device Self-Test: Not Supported 00:21:06.127 Directives: Not Supported 00:21:06.127 NVMe-MI: Not Supported 00:21:06.127 Virtualization Management: Not Supported 00:21:06.127 Doorbell Buffer Config: Not Supported 00:21:06.127 Get LBA Status Capability: Not Supported 00:21:06.127 Command & Feature Lockdown Capability: Not Supported 00:21:06.127 Abort Command Limit: 4 00:21:06.127 Async Event Request Limit: 4 00:21:06.127 Number of Firmware Slots: N/A 00:21:06.127 Firmware Slot 1 Read-Only: N/A 00:21:06.127 Firmware Activation Without Reset: N/A 00:21:06.127 Multiple Update Detection Support: N/A 00:21:06.127 Firmware Update Granularity: No Information Provided 00:21:06.127 Per-Namespace SMART Log: No 00:21:06.127 Asymmetric Namespace Access Log Page: Not Supported 00:21:06.127 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:06.127 Command Effects Log Page: Supported 00:21:06.127 Get Log Page Extended Data: Supported 00:21:06.127 Telemetry Log Pages: Not Supported 00:21:06.127 Persistent Event Log Pages: Not Supported 00:21:06.127 Supported Log Pages Log Page: May Support 00:21:06.127 Commands Supported & Effects Log Page: Not Supported 00:21:06.127 Feature Identifiers & 
Effects Log Page:May Support 00:21:06.127 NVMe-MI Commands & Effects Log Page: May Support 00:21:06.127 Data Area 4 for Telemetry Log: Not Supported 00:21:06.127 Error Log Page Entries Supported: 128 00:21:06.127 Keep Alive: Supported 00:21:06.127 Keep Alive Granularity: 10000 ms 00:21:06.127 00:21:06.127 NVM Command Set Attributes 00:21:06.127 ========================== 00:21:06.127 Submission Queue Entry Size 00:21:06.127 Max: 64 00:21:06.127 Min: 64 00:21:06.127 Completion Queue Entry Size 00:21:06.127 Max: 16 00:21:06.127 Min: 16 00:21:06.127 Number of Namespaces: 32 00:21:06.127 Compare Command: Supported 00:21:06.127 Write Uncorrectable Command: Not Supported 00:21:06.127 Dataset Management Command: Supported 00:21:06.127 Write Zeroes Command: Supported 00:21:06.127 Set Features Save Field: Not Supported 00:21:06.127 Reservations: Supported 00:21:06.127 Timestamp: Not Supported 00:21:06.127 Copy: Supported 00:21:06.127 Volatile Write Cache: Present 00:21:06.127 Atomic Write Unit (Normal): 1 00:21:06.127 Atomic Write Unit (PFail): 1 00:21:06.127 Atomic Compare & Write Unit: 1 00:21:06.127 Fused Compare & Write: Supported 00:21:06.127 Scatter-Gather List 00:21:06.127 SGL Command Set: Supported 00:21:06.127 SGL Keyed: Supported 00:21:06.127 SGL Bit Bucket Descriptor: Not Supported 00:21:06.127 SGL Metadata Pointer: Not Supported 00:21:06.127 Oversized SGL: Not Supported 00:21:06.127 SGL Metadata Address: Not Supported 00:21:06.127 SGL Offset: Supported 00:21:06.127 Transport SGL Data Block: Not Supported 00:21:06.127 Replay Protected Memory Block: Not Supported 00:21:06.127 00:21:06.127 Firmware Slot Information 00:21:06.127 ========================= 00:21:06.127 Active slot: 1 00:21:06.127 Slot 1 Firmware Revision: 24.09 00:21:06.127 00:21:06.127 00:21:06.127 Commands Supported and Effects 00:21:06.127 ============================== 00:21:06.127 Admin Commands 00:21:06.127 -------------- 00:21:06.127 Get Log Page (02h): Supported 00:21:06.127 Identify (06h): Supported 00:21:06.127 Abort (08h): Supported 00:21:06.127 Set Features (09h): Supported 00:21:06.127 Get Features (0Ah): Supported 00:21:06.127 Asynchronous Event Request (0Ch): Supported 00:21:06.127 Keep Alive (18h): Supported 00:21:06.127 I/O Commands 00:21:06.127 ------------ 00:21:06.127 Flush (00h): Supported LBA-Change 00:21:06.127 Write (01h): Supported LBA-Change 00:21:06.127 Read (02h): Supported 00:21:06.127 Compare (05h): Supported 00:21:06.127 Write Zeroes (08h): Supported LBA-Change 00:21:06.127 Dataset Management (09h): Supported LBA-Change 00:21:06.127 Copy (19h): Supported LBA-Change 00:21:06.127 00:21:06.127 Error Log 00:21:06.127 ========= 00:21:06.127 00:21:06.127 Arbitration 00:21:06.127 =========== 00:21:06.127 Arbitration Burst: 1 00:21:06.127 00:21:06.127 Power Management 00:21:06.127 ================ 00:21:06.127 Number of Power States: 1 00:21:06.127 Current Power State: Power State #0 00:21:06.127 Power State #0: 00:21:06.127 Max Power: 0.00 W 00:21:06.127 Non-Operational State: Operational 00:21:06.127 Entry Latency: Not Reported 00:21:06.127 Exit Latency: Not Reported 00:21:06.127 Relative Read Throughput: 0 00:21:06.127 Relative Read Latency: 0 00:21:06.127 Relative Write Throughput: 0 00:21:06.127 Relative Write Latency: 0 00:21:06.127 Idle Power: Not Reported 00:21:06.127 Active Power: Not Reported 00:21:06.127 Non-Operational Permissive Mode: Not Supported 00:21:06.127 00:21:06.128 Health Information 00:21:06.128 ================== 00:21:06.128 Critical Warnings: 00:21:06.128 Available Spare Space: 
OK 00:21:06.128 Temperature: OK 00:21:06.128 Device Reliability: OK 00:21:06.128 Read Only: No 00:21:06.128 Volatile Memory Backup: OK 00:21:06.128 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:06.128 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:06.128 Available Spare: 0% 00:21:06.128 Available Spare Threshold: 0% 00:21:06.128 Life Percentage Used:[2024-07-16 00:57:40.829374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.128 [2024-07-16 00:57:40.829386] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x4eb6e0) 00:21:06.128 [2024-07-16 00:57:40.829398] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.128 [2024-07-16 00:57:40.829421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54bfc0, cid 7, qid 0 00:21:06.128 [2024-07-16 00:57:40.829627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.128 [2024-07-16 00:57:40.829639] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.128 [2024-07-16 00:57:40.829646] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.128 [2024-07-16 00:57:40.829653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54bfc0) on tqpair=0x4eb6e0 00:21:06.128 [2024-07-16 00:57:40.829698] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:06.128 [2024-07-16 00:57:40.829718] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b540) on tqpair=0x4eb6e0 00:21:06.128 [2024-07-16 00:57:40.829728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.128 [2024-07-16 00:57:40.829751] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b6c0) on tqpair=0x4eb6e0 00:21:06.128 [2024-07-16 00:57:40.829759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.128 [2024-07-16 00:57:40.829767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b840) on tqpair=0x4eb6e0 00:21:06.128 [2024-07-16 00:57:40.829774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.128 [2024-07-16 00:57:40.829782] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b9c0) on tqpair=0x4eb6e0 00:21:06.128 [2024-07-16 00:57:40.829790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.128 [2024-07-16 00:57:40.829802] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.128 [2024-07-16 00:57:40.829810] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.128 [2024-07-16 00:57:40.829816] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4eb6e0) 00:21:06.128 [2024-07-16 00:57:40.829826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.128 [2024-07-16 00:57:40.829851] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b9c0, cid 3, qid 0 00:21:06.128 [2024-07-16 00:57:40.830043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.128 [2024-07-16 00:57:40.830059] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.128 [2024-07-16 00:57:40.830066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.128 [2024-07-16 00:57:40.830072] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b9c0) on tqpair=0x4eb6e0 00:21:06.128 [2024-07-16 00:57:40.830084] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.128 [2024-07-16 00:57:40.830092] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.128 [2024-07-16 00:57:40.830098] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4eb6e0) 00:21:06.128 [2024-07-16 00:57:40.830109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.128 [2024-07-16 00:57:40.830136] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b9c0, cid 3, qid 0 00:21:06.128 [2024-07-16 00:57:40.830295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.128 [2024-07-16 00:57:40.830310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.128 [2024-07-16 00:57:40.830317] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.128 [2024-07-16 00:57:40.830324] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b9c0) on tqpair=0x4eb6e0 00:21:06.128 [2024-07-16 00:57:40.830332] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:06.128 [2024-07-16 00:57:40.830339] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:06.128 [2024-07-16 00:57:40.830355] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.128 [2024-07-16 00:57:40.830364] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.128 [2024-07-16 00:57:40.830371] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4eb6e0) 00:21:06.128 [2024-07-16 00:57:40.830381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.128 [2024-07-16 00:57:40.830402] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b9c0, cid 3, qid 0 00:21:06.128 [2024-07-16 00:57:40.830570] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.128 [2024-07-16 00:57:40.830583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.128 [2024-07-16 00:57:40.830589] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.128 [2024-07-16 00:57:40.830596] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b9c0) on tqpair=0x4eb6e0 00:21:06.128 [2024-07-16 00:57:40.830612] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.128 [2024-07-16 00:57:40.830621] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.128 [2024-07-16 00:57:40.830628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4eb6e0) 00:21:06.129 [2024-07-16 00:57:40.830638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.129 [2024-07-16 00:57:40.830659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b9c0, cid 3, qid 0 00:21:06.129 [2024-07-16 00:57:40.830815] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.129 [2024-07-16 00:57:40.830827] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.129 [2024-07-16 00:57:40.830834] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.830841] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b9c0) on tqpair=0x4eb6e0 00:21:06.129 [2024-07-16 00:57:40.830857] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.830866] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.830872] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4eb6e0) 00:21:06.129 [2024-07-16 00:57:40.830897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.129 [2024-07-16 00:57:40.830920] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b9c0, cid 3, qid 0 00:21:06.129 [2024-07-16 00:57:40.831078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.129 [2024-07-16 00:57:40.831091] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.129 [2024-07-16 00:57:40.831098] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.831104] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b9c0) on tqpair=0x4eb6e0 00:21:06.129 [2024-07-16 00:57:40.831120] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.831130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.831136] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4eb6e0) 00:21:06.129 [2024-07-16 00:57:40.831147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.129 [2024-07-16 00:57:40.831167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b9c0, cid 3, qid 0 00:21:06.129 [2024-07-16 00:57:40.831304] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.129 [2024-07-16 00:57:40.831316] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.129 [2024-07-16 00:57:40.831323] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.831330] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b9c0) on tqpair=0x4eb6e0 00:21:06.129 [2024-07-16 00:57:40.831346] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.831355] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.831362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4eb6e0) 00:21:06.129 [2024-07-16 00:57:40.831372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.129 [2024-07-16 00:57:40.831392] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b9c0, cid 3, qid 0 00:21:06.129 [2024-07-16 00:57:40.831532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.129 [2024-07-16 00:57:40.831547] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.129 [2024-07-16 00:57:40.831554] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.831560] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b9c0) on tqpair=0x4eb6e0 00:21:06.129 [2024-07-16 00:57:40.831577] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.831586] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.831593] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4eb6e0) 00:21:06.129 [2024-07-16 00:57:40.831603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.129 [2024-07-16 00:57:40.831624] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b9c0, cid 3, qid 0 00:21:06.129 [2024-07-16 00:57:40.831778] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.129 [2024-07-16 00:57:40.831790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.129 [2024-07-16 00:57:40.831797] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.831803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b9c0) on tqpair=0x4eb6e0 00:21:06.129 [2024-07-16 00:57:40.831819] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.831829] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.831835] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4eb6e0) 00:21:06.129 [2024-07-16 00:57:40.831845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.129 [2024-07-16 00:57:40.831869] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b9c0, cid 3, qid 0 00:21:06.129 [2024-07-16 00:57:40.832033] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.129 [2024-07-16 00:57:40.832046] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.129 [2024-07-16 00:57:40.832052] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.832059] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b9c0) on tqpair=0x4eb6e0 00:21:06.129 [2024-07-16 00:57:40.832075] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.832084] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.832091] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4eb6e0) 00:21:06.129 [2024-07-16 00:57:40.832101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.129 [2024-07-16 00:57:40.832122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b9c0, cid 3, qid 0 00:21:06.129 [2024-07-16 00:57:40.832266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.129 [2024-07-16 00:57:40.832281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.129 [2024-07-16 00:57:40.832288] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.832294] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b9c0) on tqpair=0x4eb6e0 00:21:06.129 
[2024-07-16 00:57:40.832310] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.832320] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.832327] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4eb6e0) 00:21:06.129 [2024-07-16 00:57:40.832337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.129 [2024-07-16 00:57:40.832358] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b9c0, cid 3, qid 0 00:21:06.129 [2024-07-16 00:57:40.832495] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.129 [2024-07-16 00:57:40.832507] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.129 [2024-07-16 00:57:40.832514] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.832520] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b9c0) on tqpair=0x4eb6e0 00:21:06.129 [2024-07-16 00:57:40.832536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.832546] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.129 [2024-07-16 00:57:40.832552] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4eb6e0) 00:21:06.129 [2024-07-16 00:57:40.832563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.129 [2024-07-16 00:57:40.832582] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b9c0, cid 3, qid 0 00:21:06.129 [2024-07-16 00:57:40.832721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.130 [2024-07-16 00:57:40.832733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.130 [2024-07-16 00:57:40.832740] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.130 [2024-07-16 00:57:40.832746] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b9c0) on tqpair=0x4eb6e0 00:21:06.130 [2024-07-16 00:57:40.832762] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.130 [2024-07-16 00:57:40.832771] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.130 [2024-07-16 00:57:40.832778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4eb6e0) 00:21:06.130 [2024-07-16 00:57:40.832788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.130 [2024-07-16 00:57:40.832808] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b9c0, cid 3, qid 0 00:21:06.130 [2024-07-16 00:57:40.836903] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.130 [2024-07-16 00:57:40.836919] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.130 [2024-07-16 00:57:40.836926] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.130 [2024-07-16 00:57:40.836933] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b9c0) on tqpair=0x4eb6e0 00:21:06.130 [2024-07-16 00:57:40.836964] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:06.130 [2024-07-16 00:57:40.836975] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:06.130 [2024-07-16 
00:57:40.836982] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4eb6e0) 00:21:06.130 [2024-07-16 00:57:40.836992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.130 [2024-07-16 00:57:40.837014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54b9c0, cid 3, qid 0 00:21:06.130 [2024-07-16 00:57:40.837191] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:06.130 [2024-07-16 00:57:40.837204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:06.130 [2024-07-16 00:57:40.837211] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:06.130 [2024-07-16 00:57:40.837217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54b9c0) on tqpair=0x4eb6e0 00:21:06.130 [2024-07-16 00:57:40.837230] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:21:06.130 0% 00:21:06.130 Data Units Read: 0 00:21:06.130 Data Units Written: 0 00:21:06.130 Host Read Commands: 0 00:21:06.130 Host Write Commands: 0 00:21:06.130 Controller Busy Time: 0 minutes 00:21:06.130 Power Cycles: 0 00:21:06.130 Power On Hours: 0 hours 00:21:06.130 Unsafe Shutdowns: 0 00:21:06.130 Unrecoverable Media Errors: 0 00:21:06.130 Lifetime Error Log Entries: 0 00:21:06.130 Warning Temperature Time: 0 minutes 00:21:06.130 Critical Temperature Time: 0 minutes 00:21:06.130 00:21:06.130 Number of Queues 00:21:06.130 ================ 00:21:06.130 Number of I/O Submission Queues: 127 00:21:06.130 Number of I/O Completion Queues: 127 00:21:06.130 00:21:06.130 Active Namespaces 00:21:06.130 ================= 00:21:06.130 Namespace ID:1 00:21:06.130 Error Recovery Timeout: Unlimited 00:21:06.130 Command Set Identifier: NVM (00h) 00:21:06.130 Deallocate: Supported 00:21:06.130 Deallocated/Unwritten Error: Not Supported 00:21:06.130 Deallocated Read Value: Unknown 00:21:06.130 Deallocate in Write Zeroes: Not Supported 00:21:06.130 Deallocated Guard Field: 0xFFFF 00:21:06.130 Flush: Supported 00:21:06.130 Reservation: Supported 00:21:06.130 Namespace Sharing Capabilities: Multiple Controllers 00:21:06.130 Size (in LBAs): 131072 (0GiB) 00:21:06.130 Capacity (in LBAs): 131072 (0GiB) 00:21:06.130 Utilization (in LBAs): 131072 (0GiB) 00:21:06.130 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:06.130 EUI64: ABCDEF0123456789 00:21:06.130 UUID: 9ea34c13-7cc3-4983-8361-bed65693391f 00:21:06.130 Thin Provisioning: Not Supported 00:21:06.130 Per-NS Atomic Units: Yes 00:21:06.130 Atomic Boundary Size (Normal): 0 00:21:06.130 Atomic Boundary Size (PFail): 0 00:21:06.130 Atomic Boundary Offset: 0 00:21:06.130 Maximum Single Source Range Length: 65535 00:21:06.130 Maximum Copy Length: 65535 00:21:06.130 Maximum Source Range Count: 1 00:21:06.130 NGUID/EUI64 Never Reused: No 00:21:06.130 Namespace Write Protected: No 00:21:06.130 Number of LBA Formats: 1 00:21:06.130 Current LBA Format: LBA Format #00 00:21:06.130 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:06.130 00:21:06.130 00:57:40 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:06.130 00:57:40 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:06.130 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.130 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:06.130 
00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.130 00:57:40 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:06.130 00:57:40 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:06.130 00:57:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:06.130 00:57:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:21:06.130 00:57:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:06.130 00:57:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:21:06.130 00:57:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:06.130 00:57:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:06.130 rmmod nvme_tcp 00:21:06.388 rmmod nvme_fabrics 00:21:06.388 rmmod nvme_keyring 00:21:06.388 00:57:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:06.388 00:57:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:21:06.388 00:57:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:21:06.388 00:57:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2698214 ']' 00:21:06.388 00:57:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2698214 00:21:06.389 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2698214 ']' 00:21:06.389 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2698214 00:21:06.389 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:21:06.389 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:06.389 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2698214 00:21:06.389 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:06.389 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:06.389 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2698214' 00:21:06.389 killing process with pid 2698214 00:21:06.389 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2698214 00:21:06.389 00:57:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2698214 00:21:06.648 00:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:06.648 00:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:06.648 00:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:06.648 00:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:06.648 00:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:06.648 00:57:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.648 00:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.648 00:57:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.554 00:57:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:08.554 00:21:08.554 real 0m6.156s 00:21:08.554 user 0m7.544s 00:21:08.554 sys 0m1.862s 00:21:08.554 00:57:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:08.554 00:57:43 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@10 -- # set +x 00:21:08.554 ************************************ 00:21:08.554 END TEST nvmf_identify 00:21:08.554 ************************************ 00:21:08.554 00:57:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:08.554 00:57:43 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:08.554 00:57:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:08.554 00:57:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:08.554 00:57:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:08.826 ************************************ 00:21:08.826 START TEST nvmf_perf 00:21:08.826 ************************************ 00:21:08.826 00:57:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:08.826 * Looking for test storage... 00:21:08.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:08.826 00:57:43 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:08.826 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:08.826 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.826 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.826 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.826 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.826 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.826 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.826 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.826 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.826 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.826 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # 
nvmftestinit 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:21:08.827 00:57:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:10.733 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:10.733 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:10.733 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:10.733 00:57:45 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:10.733 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:10.733 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:10.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:10.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:21:10.992 00:21:10.992 --- 10.0.0.2 ping statistics --- 00:21:10.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.992 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:10.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:21:10.992 00:21:10.992 --- 10.0.0.1 ping statistics --- 00:21:10.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.992 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2700305 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2700305 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2700305 ']' 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:10.992 00:57:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:10.992 [2024-07-16 00:57:45.604212] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:21:10.992 [2024-07-16 00:57:45.604280] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.992 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.992 [2024-07-16 00:57:45.669581] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:11.251 [2024-07-16 00:57:45.786474] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.251 [2024-07-16 00:57:45.786536] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.251 [2024-07-16 00:57:45.786553] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.251 [2024-07-16 00:57:45.786567] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.251 [2024-07-16 00:57:45.786578] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.251 [2024-07-16 00:57:45.786664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.251 [2024-07-16 00:57:45.786732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.251 [2024-07-16 00:57:45.786825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.251 [2024-07-16 00:57:45.786827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.251 00:57:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:11.251 00:57:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:21:11.251 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:11.251 00:57:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:11.251 00:57:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:11.251 00:57:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.251 00:57:45 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:11.251 00:57:45 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:14.539 00:57:49 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:14.539 00:57:49 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:14.539 00:57:49 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:21:14.539 00:57:49 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:15.105 00:57:49 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:15.105 00:57:49 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:21:15.105 00:57:49 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:15.105 00:57:49 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:15.105 00:57:49 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:15.105 [2024-07-16 00:57:49.807219] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:21:15.105 00:57:49 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:15.363 00:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:15.363 00:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:15.620 00:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:15.620 00:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:15.878 00:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:16.135 [2024-07-16 00:57:50.802993] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.135 00:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:16.394 00:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:21:16.394 00:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:16.394 00:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:16.394 00:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:17.768 Initializing NVMe Controllers 00:21:17.768 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:21:17.769 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:21:17.769 Initialization complete. Launching workers. 00:21:17.769 ======================================================== 00:21:17.769 Latency(us) 00:21:17.769 Device Information : IOPS MiB/s Average min max 00:21:17.769 PCIE (0000:88:00.0) NSID 1 from core 0: 84277.70 329.21 377.55 11.12 6813.00 00:21:17.769 ======================================================== 00:21:17.769 Total : 84277.70 329.21 377.55 11.12 6813.00 00:21:17.769 00:21:17.769 00:57:52 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:17.769 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.170 Initializing NVMe Controllers 00:21:19.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:19.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:19.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:19.170 Initialization complete. Launching workers. 
00:21:19.170 ======================================================== 00:21:19.170 Latency(us) 00:21:19.170 Device Information : IOPS MiB/s Average min max 00:21:19.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 66.00 0.26 15582.20 238.43 45242.03 00:21:19.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 48.00 0.19 21259.05 6998.46 47901.48 00:21:19.170 ======================================================== 00:21:19.170 Total : 114.00 0.45 17972.45 238.43 47901.48 00:21:19.170 00:21:19.170 00:57:53 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:19.170 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.552 Initializing NVMe Controllers 00:21:20.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:20.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:20.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:20.552 Initialization complete. Launching workers. 00:21:20.552 ======================================================== 00:21:20.552 Latency(us) 00:21:20.552 Device Information : IOPS MiB/s Average min max 00:21:20.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8167.73 31.91 3918.27 569.14 7971.77 00:21:20.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3762.03 14.70 8505.89 5079.22 18193.78 00:21:20.552 ======================================================== 00:21:20.552 Total : 11929.76 46.60 5364.97 569.14 18193.78 00:21:20.552 00:21:20.552 00:57:54 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:20.552 00:57:54 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:20.552 00:57:54 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:20.552 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.083 Initializing NVMe Controllers 00:21:23.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:23.083 Controller IO queue size 128, less than required. 00:21:23.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:23.083 Controller IO queue size 128, less than required. 00:21:23.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:23.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:23.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:23.083 Initialization complete. Launching workers. 
00:21:23.083 ======================================================== 00:21:23.083 Latency(us) 00:21:23.083 Device Information : IOPS MiB/s Average min max 00:21:23.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 775.50 193.87 171835.02 85077.35 273676.57 00:21:23.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 599.50 149.87 224654.87 100023.47 321690.31 00:21:23.083 ======================================================== 00:21:23.083 Total : 1374.99 343.75 194864.48 85077.35 321690.31 00:21:23.083 00:21:23.083 00:57:57 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:23.083 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.083 No valid NVMe controllers or AIO or URING devices found 00:21:23.083 Initializing NVMe Controllers 00:21:23.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:23.083 Controller IO queue size 128, less than required. 00:21:23.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:23.083 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:23.083 Controller IO queue size 128, less than required. 00:21:23.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:23.083 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:23.083 WARNING: Some requested NVMe devices were skipped 00:21:23.083 00:57:57 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:23.083 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.623 Initializing NVMe Controllers 00:21:25.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:25.623 Controller IO queue size 128, less than required. 00:21:25.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:25.623 Controller IO queue size 128, less than required. 00:21:25.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:25.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:25.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:25.623 Initialization complete. Launching workers. 
00:21:25.623 00:21:25.623 ==================== 00:21:25.623 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:25.623 TCP transport: 00:21:25.623 polls: 26805 00:21:25.623 idle_polls: 16214 00:21:25.623 sock_completions: 10591 00:21:25.623 nvme_completions: 3539 00:21:25.623 submitted_requests: 5290 00:21:25.623 queued_requests: 1 00:21:25.623 00:21:25.623 ==================== 00:21:25.623 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:25.623 TCP transport: 00:21:25.623 polls: 28304 00:21:25.623 idle_polls: 11250 00:21:25.623 sock_completions: 17054 00:21:25.623 nvme_completions: 3641 00:21:25.623 submitted_requests: 5450 00:21:25.623 queued_requests: 1 00:21:25.623 ======================================================== 00:21:25.623 Latency(us) 00:21:25.623 Device Information : IOPS MiB/s Average min max 00:21:25.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 884.49 221.12 147715.32 87467.89 221968.24 00:21:25.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 909.99 227.50 145533.72 68560.82 191347.57 00:21:25.623 ======================================================== 00:21:25.623 Total : 1794.49 448.62 146609.02 68560.82 221968.24 00:21:25.623 00:21:25.623 00:58:00 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:25.623 00:58:00 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:25.880 00:58:00 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:25.880 00:58:00 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:25.880 00:58:00 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:25.880 00:58:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:25.881 00:58:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:21:25.881 00:58:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:25.881 00:58:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:21:25.881 00:58:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:25.881 00:58:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:25.881 rmmod nvme_tcp 00:21:25.881 rmmod nvme_fabrics 00:21:25.881 rmmod nvme_keyring 00:21:25.881 00:58:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:25.881 00:58:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:21:25.881 00:58:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:21:25.881 00:58:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2700305 ']' 00:21:25.881 00:58:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2700305 00:21:25.881 00:58:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2700305 ']' 00:21:25.881 00:58:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2700305 00:21:25.881 00:58:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:21:25.881 00:58:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:25.881 00:58:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2700305 00:21:25.881 00:58:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:25.881 00:58:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:25.881 00:58:00 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2700305' 00:21:25.881 killing process with pid 2700305 00:21:25.881 00:58:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2700305 00:21:25.881 00:58:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2700305 00:21:27.783 00:58:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:27.784 00:58:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:27.784 00:58:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:27.784 00:58:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:27.784 00:58:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:27.784 00:58:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.784 00:58:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.784 00:58:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.687 00:58:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:29.687 00:21:29.687 real 0m20.911s 00:21:29.687 user 1m3.466s 00:21:29.687 sys 0m5.028s 00:21:29.687 00:58:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:29.688 00:58:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:29.688 ************************************ 00:21:29.688 END TEST nvmf_perf 00:21:29.688 ************************************ 00:21:29.688 00:58:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:29.688 00:58:04 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:29.688 00:58:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:29.688 00:58:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:29.688 00:58:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:29.688 ************************************ 00:21:29.688 START TEST nvmf_fio_host 00:21:29.688 ************************************ 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:29.688 * Looking for test storage... 
00:21:29.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:21:29.688 00:58:04 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.593 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:31.594 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:31.594 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:31.594 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:31.594 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
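(Note: the trace above is gather_supported_nvmf_pci_devs walking the PCI bus for supported NVMe-oF NICs: both ports of the Intel E810 adapter (vendor 0x8086, device 0x159b, ice driver) are found at 0000:0a:00.0/0000:0a:00.1 and resolved to the kernel net devices cvl_0_0 and cvl_0_1, so is_hw=yes. A minimal stand-alone approximation of that discovery over sysfs is sketched below; the harness itself works from a cached lspci scan, so this is illustrative only, not the script's code.

  # Hypothetical stand-alone approximation of the E810 discovery seen above.
  intel=0x8086
  e810_ids="0x1592 0x159b"
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
      [[ $vendor == "$intel" ]] || continue
      for id in $e810_ids; do
          if [[ $device == "$id" ]]; then
              # net/ holds the kernel interface name(s) bound to this port, e.g. cvl_0_0
              echo "Found ${pci##*/} ($vendor - $device): $(ls "$pci/net" 2>/dev/null)"
          fi
      done
  done
)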
00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:31.594 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:31.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:21:31.874 00:21:31.874 --- 10.0.0.2 ping statistics --- 00:21:31.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.874 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:31.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:31.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:21:31.874 00:21:31.874 --- 10.0.0.1 ping statistics --- 00:21:31.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.874 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2704165 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2704165 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2704165 ']' 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.874 00:58:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.874 [2024-07-16 00:58:06.482090] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:21:31.874 [2024-07-16 00:58:06.482189] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.874 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.874 [2024-07-16 00:58:06.551293] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:32.133 [2024-07-16 00:58:06.668317] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
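(Note: nvmf_tcp_init, traced above, splits the two ports into a back-to-back topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and both directions are ping-tested before nvmf_tgt is launched inside the namespace. Condensed from the commands visible in the trace, with $SPDK_DIR standing in for the workspace checkout path shown above:

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
  # the target application then runs inside the namespace:
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF
)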
00:21:32.133 [2024-07-16 00:58:06.668378] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.133 [2024-07-16 00:58:06.668404] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.133 [2024-07-16 00:58:06.668417] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.133 [2024-07-16 00:58:06.668429] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:32.133 [2024-07-16 00:58:06.668519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.133 [2024-07-16 00:58:06.668587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.133 [2024-07-16 00:58:06.668684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:32.133 [2024-07-16 00:58:06.668687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.701 00:58:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:32.701 00:58:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:21:32.701 00:58:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:32.959 [2024-07-16 00:58:07.660516] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.959 00:58:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:32.959 00:58:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:32.959 00:58:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.959 00:58:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:33.218 Malloc1 00:21:33.218 00:58:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:33.477 00:58:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:33.735 00:58:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:33.993 [2024-07-16 00:58:08.700578] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.994 00:58:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:34.252 00:58:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:34.510 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:34.510 fio-3.35 00:21:34.510 Starting 1 thread 00:21:34.510 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.071 00:21:37.071 test: (groupid=0, jobs=1): err= 0: pid=2704633: Tue Jul 16 00:58:11 2024 00:21:37.071 read: IOPS=8743, BW=34.2MiB/s (35.8MB/s)(68.5MiB/2007msec) 00:21:37.071 slat (nsec): min=1937, max=101599, avg=2492.28, stdev=1458.71 00:21:37.071 clat (usec): min=1967, max=14621, avg=8122.02, stdev=621.42 00:21:37.072 lat (usec): min=1988, max=14624, avg=8124.51, stdev=621.32 00:21:37.072 clat percentiles (usec): 00:21:37.072 | 1.00th=[ 6783], 5.00th=[ 7177], 10.00th=[ 7373], 20.00th=[ 7635], 00:21:37.072 | 30.00th=[ 7832], 40.00th=[ 7963], 50.00th=[ 8094], 60.00th=[ 8291], 00:21:37.072 | 70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[ 8848], 95.00th=[ 8979], 00:21:37.072 | 99.00th=[ 9503], 99.50th=[ 9634], 99.90th=[11994], 99.95th=[13304], 00:21:37.072 | 99.99th=[14615] 00:21:37.072 bw ( KiB/s): 
min=34696, max=35248, per=99.97%, avg=34962.00, stdev=251.49, samples=4 00:21:37.072 iops : min= 8674, max= 8812, avg=8740.50, stdev=62.87, samples=4 00:21:37.072 write: IOPS=8741, BW=34.1MiB/s (35.8MB/s)(68.5MiB/2007msec); 0 zone resets 00:21:37.072 slat (usec): min=2, max=102, avg= 2.64, stdev= 1.33 00:21:37.072 clat (usec): min=1772, max=13202, avg=6485.93, stdev=554.15 00:21:37.072 lat (usec): min=1778, max=13204, avg=6488.57, stdev=554.10 00:21:37.072 clat percentiles (usec): 00:21:37.072 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6063], 00:21:37.072 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6456], 60.00th=[ 6587], 00:21:37.072 | 70.00th=[ 6718], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7308], 00:21:37.072 | 99.00th=[ 7635], 99.50th=[ 7832], 99.90th=[11338], 99.95th=[11863], 00:21:37.072 | 99.99th=[13173] 00:21:37.072 bw ( KiB/s): min=34496, max=35624, per=100.00%, avg=34970.00, stdev=474.97, samples=4 00:21:37.072 iops : min= 8624, max= 8906, avg=8742.50, stdev=118.74, samples=4 00:21:37.072 lat (msec) : 2=0.02%, 4=0.10%, 10=99.69%, 20=0.19% 00:21:37.072 cpu : usr=53.99%, sys=38.33%, ctx=55, majf=0, minf=33 00:21:37.072 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:37.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.072 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:37.072 issued rwts: total=17548,17545,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:37.072 00:21:37.072 Run status group 0 (all jobs): 00:21:37.072 READ: bw=34.2MiB/s (35.8MB/s), 34.2MiB/s-34.2MiB/s (35.8MB/s-35.8MB/s), io=68.5MiB (71.9MB), run=2007-2007msec 00:21:37.072 WRITE: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=68.5MiB (71.9MB), run=2007-2007msec 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:37.072 00:58:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:37.072 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:37.072 fio-3.35 00:21:37.072 Starting 1 thread 00:21:37.072 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.601 00:21:39.601 test: (groupid=0, jobs=1): err= 0: pid=2704965: Tue Jul 16 00:58:14 2024 00:21:39.601 read: IOPS=7414, BW=116MiB/s (121MB/s)(233MiB/2008msec) 00:21:39.601 slat (nsec): min=2810, max=97242, avg=3755.40, stdev=1640.41 00:21:39.601 clat (usec): min=3151, max=26018, avg=10506.13, stdev=2762.48 00:21:39.601 lat (usec): min=3154, max=26023, avg=10509.88, stdev=2762.56 00:21:39.601 clat percentiles (usec): 00:21:39.601 | 1.00th=[ 5276], 5.00th=[ 6521], 10.00th=[ 7177], 20.00th=[ 8160], 00:21:39.601 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[10945], 00:21:39.601 | 70.00th=[11731], 80.00th=[12780], 90.00th=[14353], 95.00th=[15401], 00:21:39.601 | 99.00th=[17957], 99.50th=[19268], 99.90th=[20317], 99.95th=[20579], 00:21:39.601 | 99.99th=[20841] 00:21:39.601 bw ( KiB/s): min=50560, max=69376, per=50.50%, avg=59912.00, stdev=7715.40, samples=4 00:21:39.601 iops : min= 3160, max= 4336, avg=3744.50, stdev=482.21, samples=4 00:21:39.601 write: IOPS=4224, BW=66.0MiB/s (69.2MB/s)(123MiB/1856msec); 0 zone resets 00:21:39.601 slat (usec): min=30, max=185, avg=33.80, stdev= 5.33 00:21:39.601 clat (usec): min=6846, max=24120, avg=11920.30, stdev=2338.12 00:21:39.601 lat (usec): min=6879, max=24153, avg=11954.10, stdev=2339.04 00:21:39.601 clat percentiles (usec): 00:21:39.601 | 1.00th=[ 7832], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9896], 00:21:39.601 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11600], 60.00th=[12125], 00:21:39.601 | 70.00th=[12911], 80.00th=[13829], 90.00th=[15008], 95.00th=[16188], 00:21:39.601 | 99.00th=[18482], 99.50th=[18744], 99.90th=[23200], 99.95th=[23987], 00:21:39.601 | 99.99th=[24249] 00:21:39.601 bw ( KiB/s): min=52640, max=72096, per=92.13%, avg=62264.00, stdev=7960.11, samples=4 00:21:39.601 iops : min= 3290, max= 4506, avg=3891.50, stdev=497.51, samples=4 00:21:39.601 lat (msec) : 4=0.06%, 10=38.63%, 20=61.09%, 50=0.22% 00:21:39.601 cpu : usr=73.39%, sys=22.57%, ctx=31, 
majf=0, minf=47 00:21:39.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:39.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:39.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:39.601 issued rwts: total=14888,7840,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:39.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:39.601 00:21:39.601 Run status group 0 (all jobs): 00:21:39.601 READ: bw=116MiB/s (121MB/s), 116MiB/s-116MiB/s (121MB/s-121MB/s), io=233MiB (244MB), run=2008-2008msec 00:21:39.601 WRITE: bw=66.0MiB/s (69.2MB/s), 66.0MiB/s-66.0MiB/s (69.2MB/s-69.2MB/s), io=123MiB (128MB), run=1856-1856msec 00:21:39.601 00:58:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:39.601 00:58:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:39.601 00:58:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:39.601 00:58:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:39.601 00:58:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:39.601 00:58:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:39.601 00:58:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:39.601 00:58:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:39.601 00:58:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:39.601 00:58:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:39.601 00:58:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:39.601 rmmod nvme_tcp 00:21:39.861 rmmod nvme_fabrics 00:21:39.861 rmmod nvme_keyring 00:21:39.861 00:58:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:39.861 00:58:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:39.861 00:58:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:39.861 00:58:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2704165 ']' 00:21:39.861 00:58:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2704165 00:21:39.861 00:58:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2704165 ']' 00:21:39.861 00:58:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2704165 00:21:39.861 00:58:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:21:39.861 00:58:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:39.861 00:58:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2704165 00:21:39.861 00:58:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:39.861 00:58:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:39.861 00:58:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2704165' 00:21:39.861 killing process with pid 2704165 00:21:39.861 00:58:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2704165 00:21:39.861 00:58:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2704165 00:21:40.120 00:58:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:40.120 00:58:14 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:40.120 00:58:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:40.120 00:58:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:40.120 00:58:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:40.120 00:58:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.120 00:58:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.120 00:58:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.022 00:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:42.022 00:21:42.022 real 0m12.489s 00:21:42.022 user 0m37.400s 00:21:42.022 sys 0m3.934s 00:21:42.022 00:58:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:42.022 00:58:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.022 ************************************ 00:21:42.022 END TEST nvmf_fio_host 00:21:42.022 ************************************ 00:21:42.279 00:58:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:42.279 00:58:16 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:42.279 00:58:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:42.279 00:58:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:42.279 00:58:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:42.279 ************************************ 00:21:42.279 START TEST nvmf_failover 00:21:42.279 ************************************ 00:21:42.279 00:58:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:42.279 * Looking for test storage... 
00:21:42.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:42.279 00:58:16 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:42.279 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:42.279 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.279 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:42.280 00:58:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:44.179 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:44.179 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:44.179 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:44.179 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:44.179 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:44.179 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:44.179 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:44.179 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:44.179 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:44.179 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:44.179 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:44.179 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:44.179 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:44.179 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:44.179 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:44.180 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:44.180 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:44.180 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:44.180 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:44.180 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:44.438 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:44.438 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:44.438 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:44.438 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:44.438 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:44.438 00:58:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:44.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:44.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:21:44.438 00:21:44.438 --- 10.0.0.2 ping statistics --- 00:21:44.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.438 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:44.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:44.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:21:44.438 00:21:44.438 --- 10.0.0.1 ping statistics --- 00:21:44.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.438 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2707271 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2707271 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2707271 ']' 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.438 00:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:44.438 [2024-07-16 00:58:19.091163] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
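(Note: once the target process is up, both tests drive it entirely through scripts/rpc.py over /var/tmp/spdk.sock. Stripped of the xtrace noise, the configuration sequence already seen in the nvmf_fio_host test amounts to the sketch below; the failover test that follows repeats the same pattern with a Malloc0 bdev and additional listeners on ports 4421 and 4422. $SPDK_DIR again stands in for the workspace path shown in the trace.

  rpc="$SPDK_DIR/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
)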
00:21:44.438 [2024-07-16 00:58:19.091231] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.438 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.438 [2024-07-16 00:58:19.156705] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:44.695 [2024-07-16 00:58:19.272959] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.695 [2024-07-16 00:58:19.273019] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.695 [2024-07-16 00:58:19.273037] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.695 [2024-07-16 00:58:19.273051] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.695 [2024-07-16 00:58:19.273062] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:44.695 [2024-07-16 00:58:19.273165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.695 [2024-07-16 00:58:19.273259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:44.695 [2024-07-16 00:58:19.273262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.695 00:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:44.695 00:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:44.695 00:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:44.695 00:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:44.695 00:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:44.695 00:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.695 00:58:19 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:44.952 [2024-07-16 00:58:19.658827] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.952 00:58:19 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:45.210 Malloc0 00:21:45.210 00:58:19 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:45.467 00:58:20 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:45.725 00:58:20 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:45.983 [2024-07-16 00:58:20.731753] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.241 00:58:20 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:46.241 [2024-07-16 
00:58:20.972379] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:46.241 00:58:20 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:46.500 [2024-07-16 00:58:21.233250] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:46.500 00:58:21 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2707564 00:21:46.500 00:58:21 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:46.500 00:58:21 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:46.500 00:58:21 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2707564 /var/tmp/bdevperf.sock 00:21:46.500 00:58:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2707564 ']' 00:21:46.500 00:58:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.500 00:58:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.500 00:58:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.500 00:58:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.500 00:58:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:47.063 00:58:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.063 00:58:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:47.063 00:58:21 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:47.319 NVMe0n1 00:21:47.320 00:58:22 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:47.916 00:21:47.916 00:58:22 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2707700 00:21:47.916 00:58:22 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:47.916 00:58:22 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:48.848 00:58:23 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:49.131 [2024-07-16 00:58:23.638263] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b020c0 is same with the state(5) to be set 00:21:49.131 [2024-07-16 00:58:23.638377] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1b020c0 is same with the state(5) to be set
00:21:49.131 [2024-07-16 00:58:23.638408] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b020c0 is same with the state(5) to be set
00:21:49.131 [2024-07-16 00:58:23.638421] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b020c0 is same with the state(5) to be set
00:21:49.131 [2024-07-16 00:58:23.638433] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b020c0 is same with the state(5) to be set
00:21:49.131 00:58:23 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:21:52.411 00:58:26 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:52.411 
00:21:52.411 00:58:27 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:52.671 [2024-07-16 00:58:27.397170] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b03090 is same with the state(5) to be set
00:21:52.671 [... the same nvmf_tcp_qpair_set_recv_state message for tqpair=0x1b03090 repeats roughly 50 more times between 00:58:27.397238 and 00:58:27.397843; duplicate entries omitted ...]
00:21:52.671 00:58:27 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:21:55.957 00:58:30 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:55.957 [2024-07-16 00:58:30.644985] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:55.957 00:58:30 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:21:57.334 00:58:31 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:57.334 [2024-07-16 00:58:31.948288] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b040e0 is same with the state(5) to be set
00:21:57.334 [... the same nvmf_tcp_qpair_set_recv_state message for tqpair=0x1b040e0 repeats roughly 70 more times between 00:58:31.948357 and 00:58:31.949198; duplicate entries omitted ...]
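[Editorial aside: for readers following the failover drill, the shell sketch below condenses the RPC sequence reconstructed from the commands logged above. It is not part of the test output; $SPDK stands in for the long Jenkins workspace path, the three add_listener calls are folded into a loop, and backgrounding/waits are simplified.]

    # Condensed sketch of the logged command sequence (assumptions as noted above).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Target side: TCP transport, one malloc-backed namespace, listeners on three ports.
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done

    # Initiator side: bdevperf with two paths to the same subsystem, then the I/O run.
    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

    # Failover drill while I/O is running: drop 4420, add a path on 4422, drop 4421,
    # re-add 4420, then drop 4422.
    $SPDK/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $SPDK/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    $SPDK/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422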
00:21:57.335 00:58:31 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2707700
00:22:03.925 0
00:22:03.925 00:58:37 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2707564
00:22:03.925 00:58:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2707564 ']'
00:22:03.925 00:58:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2707564
00:22:03.925 00:58:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:22:03.925 00:58:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:03.925 00:58:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2707564
00:22:03.925 00:58:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:22:03.925 00:58:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:22:03.925 00:58:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2707564'
killing process with pid 2707564
00:22:03.925 00:58:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2707564
00:22:03.925 00:58:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2707564
00:22:03.925 00:58:37 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:03.925 [2024-07-16 00:58:21.296427] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization...
00:22:03.925 [2024-07-16 00:58:21.296508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2707564 ]
00:22:03.925 EAL: No free 2048 kB hugepages reported on node 1
00:22:03.925 [2024-07-16 00:58:21.354659] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:03.925 [2024-07-16 00:58:21.463312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:22:03.925 Running I/O for 15 seconds...
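[Editorial aside: the bdevperf log (try.txt) dumped below records each queued command that was completed back to the application with "ABORTED - SQ DELETION" when the active listener was removed. When reading such a capture offline, a hypothetical convenience like the following (not part of the test) tallies the aborted commands and lists the affected LBAs.]

    # Hypothetical helper, assuming the try.txt path shown in the log above.
    TRY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    grep -c 'ABORTED - SQ DELETION' "$TRY"                      # total aborted completions
    grep -o 'lba:[0-9]*' "$TRY" | sort -t: -k2 -n | uniq | head # LBAs touched by aborted I/O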
00:22:03.925 [2024-07-16 00:58:23.640796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.925 [2024-07-16 00:58:23.640838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.925 [2024-07-16 00:58:23.640896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.925 [2024-07-16 00:58:23.640930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.925 [2024-07-16 00:58:23.640947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.925 [2024-07-16 00:58:23.640961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.925 [2024-07-16 00:58:23.640976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.925 [2024-07-16 00:58:23.640991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.925 [2024-07-16 00:58:23.641006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.925 [2024-07-16 00:58:23.641020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.925 [2024-07-16 00:58:23.641035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.925 [2024-07-16 00:58:23.641049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.925 [2024-07-16 00:58:23.641065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.925 [2024-07-16 00:58:23.641078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.925 [2024-07-16 00:58:23.641094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.925 [2024-07-16 00:58:23.641108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.925 [2024-07-16 00:58:23.641123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.925 [2024-07-16 00:58:23.641137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.925 [2024-07-16 00:58:23.641153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.925 [2024-07-16 00:58:23.641166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.925 [2024-07-16 00:58:23.641181] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.925 [2024-07-16 00:58:23.641195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.925 [2024-07-16 00:58:23.641235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.925 [2024-07-16 00:58:23.641250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.925 [2024-07-16 00:58:23.641265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.925 [2024-07-16 00:58:23.641293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.925 [2024-07-16 00:58:23.641309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641507] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78688 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.641984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.641999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.642014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.642028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.642043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.642057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.642072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.642086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.642101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:03.926 [2024-07-16 00:58:23.642114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.642130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.642143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.642169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.642182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.642197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.642211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.642226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.642239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.642254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.642268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.642283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.926 [2024-07-16 00:58:23.642297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.642312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.926 [2024-07-16 00:58:23.642325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.642340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.926 [2024-07-16 00:58:23.642357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.926 [2024-07-16 00:58:23.642374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.926 [2024-07-16 00:58:23.642387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.927 [2024-07-16 00:58:23.642416] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.927 [2024-07-16 00:58:23.642445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.927 [2024-07-16 00:58:23.642473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.927 [2024-07-16 00:58:23.642501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.642529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.642558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.642586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.642614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.642642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.642670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.642699] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.642731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.642759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.642789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.642817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.642845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.642874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.642910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.642938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.642967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.642981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.642994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.643010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.643024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.643039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.643052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.643067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.643080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.643099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.643113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.643128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.643141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.643165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.643179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.927 [2024-07-16 00:58:23.643194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.927 [2024-07-16 00:58:23.643207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.643232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.928 [2024-07-16 00:58:23.643245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.643260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.928 [2024-07-16 00:58:23.643274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.643289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.928 [2024-07-16 00:58:23.643302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 
[2024-07-16 00:58:23.643316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.928 [2024-07-16 00:58:23.643330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.643344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.928 [2024-07-16 00:58:23.643358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.643372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.928 [2024-07-16 00:58:23.643386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.643401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.928 [2024-07-16 00:58:23.643414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.643429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.928 [2024-07-16 00:58:23.643443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.643473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.928 [2024-07-16 00:58:23.643489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78296 len:8 PRP1 0x0 PRP2 0x0 00:22:03.928 [2024-07-16 00:58:23.643513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.643532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.928 [2024-07-16 00:58:23.643544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.928 [2024-07-16 00:58:23.643556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78304 len:8 PRP1 0x0 PRP2 0x0 00:22:03.928 [2024-07-16 00:58:23.643569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.643582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.928 [2024-07-16 00:58:23.643592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.928 [2024-07-16 00:58:23.643603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78312 len:8 PRP1 0x0 PRP2 0x0 00:22:03.928 [2024-07-16 00:58:23.643616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.643629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.928 [2024-07-16 00:58:23.643640] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.928 [2024-07-16 00:58:23.643652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78320 len:8 PRP1 0x0 PRP2 0x0 00:22:03.928 [2024-07-16 00:58:23.643664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.643677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.928 [2024-07-16 00:58:23.643688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.928 [2024-07-16 00:58:23.643699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78328 len:8 PRP1 0x0 PRP2 0x0 00:22:03.928 [2024-07-16 00:58:23.643712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.643725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.928 [2024-07-16 00:58:23.643735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.928 [2024-07-16 00:58:23.643747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78336 len:8 PRP1 0x0 PRP2 0x0 00:22:03.928 [2024-07-16 00:58:23.643759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.643772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.928 [2024-07-16 00:58:23.643782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.928 [2024-07-16 00:58:23.643794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78344 len:8 PRP1 0x0 PRP2 0x0 00:22:03.928 [2024-07-16 00:58:23.643806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.643819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.928 [2024-07-16 00:58:23.643830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.928 [2024-07-16 00:58:23.643841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79072 len:8 PRP1 0x0 PRP2 0x0 00:22:03.928 [2024-07-16 00:58:23.643853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.643866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.928 [2024-07-16 00:58:23.643886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.928 [2024-07-16 00:58:23.643899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79080 len:8 PRP1 0x0 PRP2 0x0 00:22:03.928 [2024-07-16 00:58:23.643917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.643931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.928 [2024-07-16 00:58:23.643942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:22:03.928 [2024-07-16 00:58:23.643953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79088 len:8 PRP1 0x0 PRP2 0x0 00:22:03.928 [2024-07-16 00:58:23.643966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.643979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.928 [2024-07-16 00:58:23.643989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.928 [2024-07-16 00:58:23.644000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79096 len:8 PRP1 0x0 PRP2 0x0 00:22:03.928 [2024-07-16 00:58:23.644013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.644026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.928 [2024-07-16 00:58:23.644036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.928 [2024-07-16 00:58:23.644047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79104 len:8 PRP1 0x0 PRP2 0x0 00:22:03.928 [2024-07-16 00:58:23.644059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.644072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.928 [2024-07-16 00:58:23.644083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.928 [2024-07-16 00:58:23.644094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79112 len:8 PRP1 0x0 PRP2 0x0 00:22:03.928 [2024-07-16 00:58:23.644106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.644119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.928 [2024-07-16 00:58:23.644129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.928 [2024-07-16 00:58:23.644140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79120 len:8 PRP1 0x0 PRP2 0x0 00:22:03.928 [2024-07-16 00:58:23.644152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.928 [2024-07-16 00:58:23.644165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.928 [2024-07-16 00:58:23.644176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.928 [2024-07-16 00:58:23.644187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79128 len:8 PRP1 0x0 PRP2 0x0 00:22:03.928 [2024-07-16 00:58:23.644200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.929 [2024-07-16 00:58:23.644213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.929 [2024-07-16 00:58:23.644223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.929 [2024-07-16 
00:58:23.644234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79136 len:8 PRP1 0x0 PRP2 0x0 00:22:03.929 [2024-07-16 00:58:23.644247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.929 [2024-07-16 00:58:23.644264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.929 [2024-07-16 00:58:23.644275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.929 [2024-07-16 00:58:23.644286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79144 len:8 PRP1 0x0 PRP2 0x0 00:22:03.929 [2024-07-16 00:58:23.644303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.929 [2024-07-16 00:58:23.644316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.929 [2024-07-16 00:58:23.644327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.929 [2024-07-16 00:58:23.644338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79152 len:8 PRP1 0x0 PRP2 0x0 00:22:03.929 [2024-07-16 00:58:23.644350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.929 [2024-07-16 00:58:23.644363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.929 [2024-07-16 00:58:23.644373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.929 [2024-07-16 00:58:23.644384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79160 len:8 PRP1 0x0 PRP2 0x0 00:22:03.929 [2024-07-16 00:58:23.644397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.929 [2024-07-16 00:58:23.644409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.929 [2024-07-16 00:58:23.644420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.929 [2024-07-16 00:58:23.644431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79168 len:8 PRP1 0x0 PRP2 0x0 00:22:03.929 [2024-07-16 00:58:23.644443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.929 [2024-07-16 00:58:23.644456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.929 [2024-07-16 00:58:23.644467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.929 [2024-07-16 00:58:23.644478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79176 len:8 PRP1 0x0 PRP2 0x0 00:22:03.929 [2024-07-16 00:58:23.644490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.929 [2024-07-16 00:58:23.644503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.929 [2024-07-16 00:58:23.644514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.929 [2024-07-16 00:58:23.644525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79184 len:8 PRP1 0x0 PRP2 0x0 00:22:03.929 [2024-07-16 00:58:23.644537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.929 [2024-07-16 00:58:23.644550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.929 [2024-07-16 00:58:23.644560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.929 [2024-07-16 00:58:23.644571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79192 len:8 PRP1 0x0 PRP2 0x0 00:22:03.929 [2024-07-16 00:58:23.644584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.929 [2024-07-16 00:58:23.644597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.929 [2024-07-16 00:58:23.644607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.929 [2024-07-16 00:58:23.644618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79200 len:8 PRP1 0x0 PRP2 0x0 00:22:03.929 [2024-07-16 00:58:23.644634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.929 [2024-07-16 00:58:23.644648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.929 [2024-07-16 00:58:23.644659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.929 [2024-07-16 00:58:23.644670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79208 len:8 PRP1 0x0 PRP2 0x0 00:22:03.929 [2024-07-16 00:58:23.644682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.929 [2024-07-16 00:58:23.644696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.929 [2024-07-16 00:58:23.644706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.929 [2024-07-16 00:58:23.644717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79216 len:8 PRP1 0x0 PRP2 0x0 00:22:03.929 [2024-07-16 00:58:23.644730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.929 [2024-07-16 00:58:23.644743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.929 [2024-07-16 00:58:23.644753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.929 [2024-07-16 00:58:23.644764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79224 len:8 PRP1 0x0 PRP2 0x0 00:22:03.929 [2024-07-16 00:58:23.644777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.929 [2024-07-16 00:58:23.644789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.929 [2024-07-16 00:58:23.644800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.929 [2024-07-16 00:58:23.644811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:79232 len:8 PRP1 0x0 PRP2 0x0 00:22:03.929 [2024-07-16 00:58:23.644823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.929 [2024-07-16 00:58:23.644836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.929 [2024-07-16 00:58:23.644846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.929 [2024-07-16 00:58:23.644857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78352 len:8 PRP1 0x0 PRP2 0x0 00:22:03.929 [2024-07-16 00:58:23.644870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.929 [2024-07-16 00:58:23.644891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.929 [2024-07-16 00:58:23.644902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.929 [2024-07-16 00:58:23.644913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78360 len:8 PRP1 0x0 PRP2 0x0 00:22:03.929 [2024-07-16 00:58:23.644926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.929 [2024-07-16 00:58:23.644938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.929 [2024-07-16 00:58:23.644949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.930 [2024-07-16 00:58:23.644960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78368 len:8 PRP1 0x0 PRP2 0x0 00:22:03.930 [2024-07-16 00:58:23.644972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.930 [2024-07-16 00:58:23.644985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.930 [2024-07-16 00:58:23.644995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.930 [2024-07-16 00:58:23.645010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78376 len:8 PRP1 0x0 PRP2 0x0 00:22:03.930 [2024-07-16 00:58:23.645023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.930 [2024-07-16 00:58:23.645036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.930 [2024-07-16 00:58:23.645046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.930 [2024-07-16 00:58:23.645057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78384 len:8 PRP1 0x0 PRP2 0x0 00:22:03.930 [2024-07-16 00:58:23.645070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.930 [2024-07-16 00:58:23.645083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.930 [2024-07-16 00:58:23.645093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.930 [2024-07-16 00:58:23.645104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78392 len:8 PRP1 0x0 PRP2 0x0 00:22:03.930 
[2024-07-16 00:58:23.645117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.930 [2024-07-16 00:58:23.645129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.930 [2024-07-16 00:58:23.645140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.930 [2024-07-16 00:58:23.645151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78400 len:8 PRP1 0x0 PRP2 0x0 00:22:03.930 [2024-07-16 00:58:23.645163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.930 [2024-07-16 00:58:23.645176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.930 [2024-07-16 00:58:23.645187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.930 [2024-07-16 00:58:23.645198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78408 len:8 PRP1 0x0 PRP2 0x0 00:22:03.930 [2024-07-16 00:58:23.645210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.930 [2024-07-16 00:58:23.645223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.930 [2024-07-16 00:58:23.645233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.930 [2024-07-16 00:58:23.645244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78416 len:8 PRP1 0x0 PRP2 0x0 00:22:03.930 [2024-07-16 00:58:23.645256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.930 [2024-07-16 00:58:23.645269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.930 [2024-07-16 00:58:23.645280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.930 [2024-07-16 00:58:23.645291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78424 len:8 PRP1 0x0 PRP2 0x0 00:22:03.930 [2024-07-16 00:58:23.645304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.930 [2024-07-16 00:58:23.645316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.930 [2024-07-16 00:58:23.645327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.930 [2024-07-16 00:58:23.645338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78432 len:8 PRP1 0x0 PRP2 0x0 00:22:03.930 [2024-07-16 00:58:23.645350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.930 [2024-07-16 00:58:23.645362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.930 [2024-07-16 00:58:23.645376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.930 [2024-07-16 00:58:23.645387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78440 len:8 PRP1 0x0 PRP2 0x0 00:22:03.930 [2024-07-16 00:58:23.645400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.930 [2024-07-16 00:58:23.645414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.930 [2024-07-16 00:58:23.645425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.930 [2024-07-16 00:58:23.645436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78448 len:8 PRP1 0x0 PRP2 0x0 00:22:03.930 [2024-07-16 00:58:23.645448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.930 [2024-07-16 00:58:23.645461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.930 [2024-07-16 00:58:23.645471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.930 [2024-07-16 00:58:23.645482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79240 len:8 PRP1 0x0 PRP2 0x0 00:22:03.930 [2024-07-16 00:58:23.645494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.930 [2024-07-16 00:58:23.645551] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1eeb380 was disconnected and freed. reset controller. 00:22:03.930 [2024-07-16 00:58:23.645568] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:03.930 [2024-07-16 00:58:23.645602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.930 [2024-07-16 00:58:23.645619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.930 [2024-07-16 00:58:23.645634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.930 [2024-07-16 00:58:23.645647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.930 [2024-07-16 00:58:23.645660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.930 [2024-07-16 00:58:23.645672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.930 [2024-07-16 00:58:23.645686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.930 [2024-07-16 00:58:23.645698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.930 [2024-07-16 00:58:23.645711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.930 [2024-07-16 00:58:23.645756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec52e0 (9): Bad file descriptor 00:22:03.930 [2024-07-16 00:58:23.648996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.930 [2024-07-16 00:58:23.681396] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:03.930 [2024-07-16 00:58:27.399059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-07-16 00:58:27.399109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.930 [2024-07-16 00:58:27.399140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.930 [2024-07-16 00:58:27.399162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.930 [2024-07-16 00:58:27.399179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.930 [2024-07-16 00:58:27.399194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.930 [2024-07-16 00:58:27.399209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.930 [2024-07-16 00:58:27.399224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.931 [2024-07-16 00:58:27.399252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.931 [2024-07-16 00:58:27.399281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.931 [2024-07-16 00:58:27.399310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.931 [2024-07-16 00:58:27.399339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.931 [2024-07-16 00:58:27.399368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.931 [2024-07-16 00:58:27.399397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399413] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.931 [2024-07-16 00:58:27.399427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.931 [2024-07-16 00:58:27.399455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.931 [2024-07-16 00:58:27.399484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.931 [2024-07-16 00:58:27.399513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:88640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-07-16 00:58:27.399553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-07-16 00:58:27.399582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-07-16 00:58:27.399611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:88664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-07-16 00:58:27.399641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:88672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-07-16 00:58:27.399670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:88680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-07-16 00:58:27.399698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:88688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-07-16 00:58:27.399727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:88696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-07-16 00:58:27.399756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.931 [2024-07-16 00:58:27.399785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.931 [2024-07-16 00:58:27.399814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.931 [2024-07-16 00:58:27.399842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.931 [2024-07-16 00:58:27.399872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.931 [2024-07-16 00:58:27.399909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.931 [2024-07-16 00:58:27.399943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.931 [2024-07-16 00:58:27.399972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.399987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.931 [2024-07-16 00:58:27.400001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.931 [2024-07-16 00:58:27.400016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:115 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.931 [2024-07-16 00:58:27.400029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89080 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 
00:58:27.400615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.400985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.400999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.401013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.401031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.401047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.401061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.401076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.401090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.401105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.932 [2024-07-16 00:58:27.401119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.932 [2024-07-16 00:58:27.401134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:88704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.933 [2024-07-16 00:58:27.401350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:89392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.933 [2024-07-16 00:58:27.401727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:88720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.933 [2024-07-16 00:58:27.401755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.933 [2024-07-16 00:58:27.401787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 
00:58:27.401803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:88736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.933 [2024-07-16 00:58:27.401816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:88744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.933 [2024-07-16 00:58:27.401844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:88752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.933 [2024-07-16 00:58:27.401875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.933 [2024-07-16 00:58:27.401911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.401984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.401997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.402012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.402026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.402041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.402054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.402069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.402083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.402098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.402111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.402127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.402144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.402159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:89520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.402172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.402187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.402201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.933 [2024-07-16 00:58:27.402216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.933 [2024-07-16 00:58:27.402230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.402244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.934 [2024-07-16 00:58:27.402258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.402273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.934 [2024-07-16 00:58:27.402286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.402301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.934 [2024-07-16 00:58:27.402314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.402329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.934 [2024-07-16 00:58:27.402343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.402358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.934 [2024-07-16 00:58:27.402371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.402401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:03.934 [2024-07-16 00:58:27.402418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89584 len:8 PRP1 0x0 PRP2 0x0 00:22:03.934 [2024-07-16 00:58:27.402431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.402450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.934 [2024-07-16 00:58:27.402462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.934 [2024-07-16 00:58:27.402473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89592 len:8 PRP1 0x0 PRP2 0x0 00:22:03.934 [2024-07-16 00:58:27.402486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.402499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.934 [2024-07-16 00:58:27.402509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.934 [2024-07-16 00:58:27.402520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89600 len:8 PRP1 0x0 PRP2 0x0 00:22:03.934 [2024-07-16 00:58:27.402537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.402551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.934 [2024-07-16 00:58:27.402561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.934 [2024-07-16 00:58:27.402572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89608 len:8 PRP1 0x0 PRP2 0x0 00:22:03.934 [2024-07-16 00:58:27.402593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.402606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.934 [2024-07-16 00:58:27.402617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.934 [2024-07-16 00:58:27.402628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89616 len:8 PRP1 0x0 PRP2 0x0 00:22:03.934 [2024-07-16 00:58:27.402641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.402654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.934 [2024-07-16 00:58:27.402664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.934 [2024-07-16 00:58:27.402675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89624 len:8 PRP1 0x0 PRP2 0x0 00:22:03.934 [2024-07-16 00:58:27.402688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.402701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.934 [2024-07-16 00:58:27.402711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.934 [2024-07-16 
00:58:27.402722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89632 len:8 PRP1 0x0 PRP2 0x0 00:22:03.934 [2024-07-16 00:58:27.402735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.402747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.934 [2024-07-16 00:58:27.402758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.934 [2024-07-16 00:58:27.402769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89640 len:8 PRP1 0x0 PRP2 0x0 00:22:03.934 [2024-07-16 00:58:27.402781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.402794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.934 [2024-07-16 00:58:27.402804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.934 [2024-07-16 00:58:27.402815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89648 len:8 PRP1 0x0 PRP2 0x0 00:22:03.934 [2024-07-16 00:58:27.402828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.402841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.934 [2024-07-16 00:58:27.402851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.934 [2024-07-16 00:58:27.402862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88768 len:8 PRP1 0x0 PRP2 0x0 00:22:03.934 [2024-07-16 00:58:27.402874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.402896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.934 [2024-07-16 00:58:27.402907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.934 [2024-07-16 00:58:27.402922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88776 len:8 PRP1 0x0 PRP2 0x0 00:22:03.934 [2024-07-16 00:58:27.402935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.402948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.934 [2024-07-16 00:58:27.402959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.934 [2024-07-16 00:58:27.402969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88784 len:8 PRP1 0x0 PRP2 0x0 00:22:03.934 [2024-07-16 00:58:27.402987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.403001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.934 [2024-07-16 00:58:27.403012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.934 [2024-07-16 00:58:27.403022] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88792 len:8 PRP1 0x0 PRP2 0x0 00:22:03.934 [2024-07-16 00:58:27.403035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.403047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.934 [2024-07-16 00:58:27.403058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.934 [2024-07-16 00:58:27.403069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88800 len:8 PRP1 0x0 PRP2 0x0 00:22:03.934 [2024-07-16 00:58:27.403081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.403094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.934 [2024-07-16 00:58:27.403104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.934 [2024-07-16 00:58:27.403115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88808 len:8 PRP1 0x0 PRP2 0x0 00:22:03.934 [2024-07-16 00:58:27.403128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.403141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.934 [2024-07-16 00:58:27.403151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.934 [2024-07-16 00:58:27.403162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88816 len:8 PRP1 0x0 PRP2 0x0 00:22:03.934 [2024-07-16 00:58:27.403174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.403188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.934 [2024-07-16 00:58:27.403198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.934 [2024-07-16 00:58:27.403209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88824 len:8 PRP1 0x0 PRP2 0x0 00:22:03.934 [2024-07-16 00:58:27.403222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.403289] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2090080 was disconnected and freed. reset controller. 
00:22:03.934 [2024-07-16 00:58:27.403307] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:03.934 [2024-07-16 00:58:27.403343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.934 [2024-07-16 00:58:27.403361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.403380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.934 [2024-07-16 00:58:27.403393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.403407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.934 [2024-07-16 00:58:27.403420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.403433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.934 [2024-07-16 00:58:27.403445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.934 [2024-07-16 00:58:27.403458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.934 [2024-07-16 00:58:27.406702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.934 [2024-07-16 00:58:27.406743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec52e0 (9): Bad file descriptor 00:22:03.935 [2024-07-16 00:58:27.531198] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:03.935 [2024-07-16 00:58:31.951127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.935 [2024-07-16 00:58:31.951189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.935 [2024-07-16 00:58:31.951235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951514] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:124 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.951981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.951996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.952010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.952025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.952039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.952054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.952067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.952082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.952096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.935 [2024-07-16 00:58:31.952111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45496 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:03.935 [2024-07-16 00:58:31.952125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 
[2024-07-16 00:58:31.952427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952716] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.936 [2024-07-16 00:58:31.952903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.936 [2024-07-16 00:58:31.952932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.952983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.952998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.953013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.953026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.953041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.953055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.953070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.953083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.953098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.953112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.953127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.953140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.936 [2024-07-16 00:58:31.953155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.936 [2024-07-16 00:58:31.953169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.953183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.937 [2024-07-16 00:58:31.953197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.953213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.937 [2024-07-16 00:58:31.953227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.953242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.937 [2024-07-16 00:58:31.953255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.953270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.937 [2024-07-16 00:58:31.953283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.953299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.937 [2024-07-16 00:58:31.953313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.953328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.937 [2024-07-16 00:58:31.953345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.953360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.937 [2024-07-16 00:58:31.953374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.953389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.937 [2024-07-16 00:58:31.953402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.953417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.937 [2024-07-16 00:58:31.953431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.953445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.937 [2024-07-16 00:58:31.953458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.953488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.937 [2024-07-16 00:58:31.953505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45848 len:8 PRP1 0x0 PRP2 0x0 00:22:03.937 [2024-07-16 00:58:31.953519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.953698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.937 [2024-07-16 00:58:31.953717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.937 [2024-07-16 00:58:31.953729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45856 len:8 PRP1 0x0 PRP2 0x0 00:22:03.937 [2024-07-16 00:58:31.953743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.953759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.937 [2024-07-16 00:58:31.953770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.937 [2024-07-16 00:58:31.953781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45864 len:8 PRP1 0x0 PRP2 0x0 00:22:03.937 [2024-07-16 00:58:31.953794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.953807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.937 [2024-07-16 
00:58:31.953818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.937 [2024-07-16 00:58:31.953829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45872 len:8 PRP1 0x0 PRP2 0x0 00:22:03.937 [2024-07-16 00:58:31.953841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.953854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.937 [2024-07-16 00:58:31.953864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.937 [2024-07-16 00:58:31.953881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45880 len:8 PRP1 0x0 PRP2 0x0 00:22:03.937 [2024-07-16 00:58:31.953896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.953914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.937 [2024-07-16 00:58:31.953925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.937 [2024-07-16 00:58:31.953937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45888 len:8 PRP1 0x0 PRP2 0x0 00:22:03.937 [2024-07-16 00:58:31.953950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.953963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.937 [2024-07-16 00:58:31.953973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.937 [2024-07-16 00:58:31.953984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45896 len:8 PRP1 0x0 PRP2 0x0 00:22:03.937 [2024-07-16 00:58:31.953997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.954009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.937 [2024-07-16 00:58:31.954020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.937 [2024-07-16 00:58:31.954031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45904 len:8 PRP1 0x0 PRP2 0x0 00:22:03.937 [2024-07-16 00:58:31.954043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.954056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.937 [2024-07-16 00:58:31.954067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.937 [2024-07-16 00:58:31.954077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45912 len:8 PRP1 0x0 PRP2 0x0 00:22:03.937 [2024-07-16 00:58:31.954090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.954102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.937 [2024-07-16 00:58:31.954113] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.937 [2024-07-16 00:58:31.954124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45920 len:8 PRP1 0x0 PRP2 0x0 00:22:03.937 [2024-07-16 00:58:31.954136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.954149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.937 [2024-07-16 00:58:31.954160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.937 [2024-07-16 00:58:31.954171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45928 len:8 PRP1 0x0 PRP2 0x0 00:22:03.937 [2024-07-16 00:58:31.954183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.954196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.937 [2024-07-16 00:58:31.954207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.937 [2024-07-16 00:58:31.954218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45936 len:8 PRP1 0x0 PRP2 0x0 00:22:03.937 [2024-07-16 00:58:31.954230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.954243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.937 [2024-07-16 00:58:31.954254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.937 [2024-07-16 00:58:31.954265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45944 len:8 PRP1 0x0 PRP2 0x0 00:22:03.937 [2024-07-16 00:58:31.954280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.954294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.937 [2024-07-16 00:58:31.954305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.937 [2024-07-16 00:58:31.954316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45952 len:8 PRP1 0x0 PRP2 0x0 00:22:03.937 [2024-07-16 00:58:31.954328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.954340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.937 [2024-07-16 00:58:31.954350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.937 [2024-07-16 00:58:31.954361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45960 len:8 PRP1 0x0 PRP2 0x0 00:22:03.937 [2024-07-16 00:58:31.954373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.954386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.937 [2024-07-16 00:58:31.954396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:03.937 [2024-07-16 00:58:31.954407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45968 len:8 PRP1 0x0 PRP2 0x0 00:22:03.937 [2024-07-16 00:58:31.954420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.954432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.937 [2024-07-16 00:58:31.954443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.937 [2024-07-16 00:58:31.954453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45976 len:8 PRP1 0x0 PRP2 0x0 00:22:03.937 [2024-07-16 00:58:31.954466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.937 [2024-07-16 00:58:31.954478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.937 [2024-07-16 00:58:31.954489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.937 [2024-07-16 00:58:31.954500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45984 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.954512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.954524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.954535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.954545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45992 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.954558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.954570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.954581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.954591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46000 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.954604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.954616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.954627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.954641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46008 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.954654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.954666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.954677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 
00:58:31.954688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46016 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.954700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.954713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.954723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.954734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46024 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.954746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.954759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.954769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.954780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46032 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.954793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.954805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.954816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.954826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46040 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.954839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.954852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.954862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.954873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46048 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.954892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.954905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.954916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.954927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46056 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.954939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.954952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.954962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.954973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46064 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.954985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.954998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.955012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.955024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46072 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.955036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.955049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.955060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.955076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46080 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.955089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.955102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.955113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.955124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46088 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.955136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.955149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.955160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.955171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46096 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.955184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.955197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.955208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.955218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45216 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.955231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.955244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.955255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.955266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:45224 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.955278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.955291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.955302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.955313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45232 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.955326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.955339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.955349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.955360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45240 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.955373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.955389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.955401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.955412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45248 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.955425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.955438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.955448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.955465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45256 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.955478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.955491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.955502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.955513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45264 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.955525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.955538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.955549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.955560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46104 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 
[2024-07-16 00:58:31.955573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.938 [2024-07-16 00:58:31.955586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.938 [2024-07-16 00:58:31.955596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.938 [2024-07-16 00:58:31.955607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46112 len:8 PRP1 0x0 PRP2 0x0 00:22:03.938 [2024-07-16 00:58:31.955620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.955633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.955643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.955654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46120 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.955667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.955680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.955690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.955701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46128 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.955714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.955727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.955738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.955749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46136 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.955765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.955778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.955789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.955800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46144 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.955812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.955825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.955835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.955852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46152 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.955865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.955885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.955898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.955909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46160 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.955921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.955933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.955944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.955955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46168 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.955967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.955980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.955990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.956001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46176 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.956014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.956027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.956037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.956048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46184 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.956061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.956074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.956084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.956095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46192 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.956107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.956120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.956131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.956145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46200 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.956158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.956171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.956182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.956193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45184 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.956205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.956218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.956229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.956246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45192 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.956259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.956271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.956282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.956292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45272 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.956305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.956318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.956328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.956339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45280 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.956351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.956364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.956374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.956385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45288 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.956398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.956410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.956421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.956432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45296 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.956444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:03.939 [2024-07-16 00:58:31.956457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.956468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.956478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45304 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.956491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.956507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.956518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.956529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45312 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.956542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.956555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.956565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.956576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45320 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.956589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.956601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.956612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.956628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45328 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.956641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.956654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.956665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.956676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45336 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.956688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.956701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.956711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.939 [2024-07-16 00:58:31.956722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45344 len:8 PRP1 0x0 PRP2 0x0 00:22:03.939 [2024-07-16 00:58:31.956734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.939 [2024-07-16 00:58:31.956747] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.939 [2024-07-16 00:58:31.956757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.956768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45352 len:8 PRP1 0x0 PRP2 0x0 00:22:03.940 [2024-07-16 00:58:31.956780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.940 [2024-07-16 00:58:31.956793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.940 [2024-07-16 00:58:31.956804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.956814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45360 len:8 PRP1 0x0 PRP2 0x0 00:22:03.940 [2024-07-16 00:58:31.956827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.940 [2024-07-16 00:58:31.956840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.940 [2024-07-16 00:58:31.956850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.956861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45368 len:8 PRP1 0x0 PRP2 0x0 00:22:03.940 [2024-07-16 00:58:31.956882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.940 [2024-07-16 00:58:31.956897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.940 [2024-07-16 00:58:31.956908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.956919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45376 len:8 PRP1 0x0 PRP2 0x0 00:22:03.940 [2024-07-16 00:58:31.956931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.940 [2024-07-16 00:58:31.956943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.940 [2024-07-16 00:58:31.956954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.956965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45384 len:8 PRP1 0x0 PRP2 0x0 00:22:03.940 [2024-07-16 00:58:31.956977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.940 [2024-07-16 00:58:31.956990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.940 [2024-07-16 00:58:31.957000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.957011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45392 len:8 PRP1 0x0 PRP2 0x0 00:22:03.940 [2024-07-16 00:58:31.957024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.940 [2024-07-16 00:58:31.957036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:03.940 [2024-07-16 00:58:31.957047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.957058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45400 len:8 PRP1 0x0 PRP2 0x0 00:22:03.940 [2024-07-16 00:58:31.957070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.940 [2024-07-16 00:58:31.957082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.940 [2024-07-16 00:58:31.957093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.957103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45408 len:8 PRP1 0x0 PRP2 0x0 00:22:03.940 [2024-07-16 00:58:31.957116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.940 [2024-07-16 00:58:31.957128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.940 [2024-07-16 00:58:31.957138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.957149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45416 len:8 PRP1 0x0 PRP2 0x0 00:22:03.940 [2024-07-16 00:58:31.957161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.940 [2024-07-16 00:58:31.957174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.940 [2024-07-16 00:58:31.957184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.957195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45424 len:8 PRP1 0x0 PRP2 0x0 00:22:03.940 [2024-07-16 00:58:31.957207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.940 [2024-07-16 00:58:31.957219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.940 [2024-07-16 00:58:31.957230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.957244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45432 len:8 PRP1 0x0 PRP2 0x0 00:22:03.940 [2024-07-16 00:58:31.957257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.940 [2024-07-16 00:58:31.957269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.940 [2024-07-16 00:58:31.957280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.957290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45440 len:8 PRP1 0x0 PRP2 0x0 00:22:03.940 [2024-07-16 00:58:31.957303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.940 [2024-07-16 00:58:31.957315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.940 [2024-07-16 
00:58:31.957325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.957336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45448 len:8 PRP1 0x0 PRP2 0x0 00:22:03.940 [2024-07-16 00:58:31.957348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.940 [2024-07-16 00:58:31.957361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.940 [2024-07-16 00:58:31.957371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.957382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45456 len:8 PRP1 0x0 PRP2 0x0 00:22:03.940 [2024-07-16 00:58:31.957394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.940 [2024-07-16 00:58:31.957406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.940 [2024-07-16 00:58:31.957417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.957427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45464 len:8 PRP1 0x0 PRP2 0x0 00:22:03.940 [2024-07-16 00:58:31.957439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.940 [2024-07-16 00:58:31.957452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.940 [2024-07-16 00:58:31.957463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.957473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45472 len:8 PRP1 0x0 PRP2 0x0 00:22:03.940 [2024-07-16 00:58:31.957485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.940 [2024-07-16 00:58:31.957498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.940 [2024-07-16 00:58:31.957508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.957519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45480 len:8 PRP1 0x0 PRP2 0x0 00:22:03.940 [2024-07-16 00:58:31.957531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.940 [2024-07-16 00:58:31.957543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.940 [2024-07-16 00:58:31.957553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.957564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45488 len:8 PRP1 0x0 PRP2 0x0 00:22:03.940 [2024-07-16 00:58:31.957576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.940 [2024-07-16 00:58:31.957589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.940 [2024-07-16 00:58:31.957602] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.957613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45496 len:8 PRP1 0x0 PRP2 0x0 00:22:03.940 [2024-07-16 00:58:31.957626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.940 [2024-07-16 00:58:31.957638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.940 [2024-07-16 00:58:31.957649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.940 [2024-07-16 00:58:31.957659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45504 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.957671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.963816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.963844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.963858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45512 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.963871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.963895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.963907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.963918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45520 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.963931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.963944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.963955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.963966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45528 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.963978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.963991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.964012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45536 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.964059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45544 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.964105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45552 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.964159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45560 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.964205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45568 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.964251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45576 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.964297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45584 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 
00:58:31.964343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45592 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.964390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45600 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.964436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45608 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.964481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45616 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.964532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45624 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.964578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45632 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.964625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45640 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.964672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45648 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.964718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45656 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.964764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45664 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.964809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45672 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.964861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45680 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.941 [2024-07-16 00:58:31.964907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.941 [2024-07-16 00:58:31.964918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:45688 len:8 PRP1 0x0 PRP2 0x0 00:22:03.941 [2024-07-16 00:58:31.964930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.941 [2024-07-16 00:58:31.964943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.964953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.964964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45696 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.964976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.964989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.964999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45200 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45208 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45704 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45712 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45720 len:8 PRP1 0x0 PRP2 0x0 
00:22:03.942 [2024-07-16 00:58:31.965206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45728 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45736 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45744 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45752 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45760 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45768 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45776 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45784 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45792 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45800 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45808 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45816 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45824 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45832 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45840 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.965929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.942 [2024-07-16 00:58:31.965940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.942 [2024-07-16 00:58:31.965954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45848 len:8 PRP1 0x0 PRP2 0x0 00:22:03.942 [2024-07-16 00:58:31.965967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.942 [2024-07-16 00:58:31.966037] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x208fd40 was disconnected and freed. reset controller. 
00:22:03.942 [2024-07-16 00:58:31.966055] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:03.942 [2024-07-16 00:58:31.966096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.943 [2024-07-16 00:58:31.966114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.943 [2024-07-16 00:58:31.966130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.943 [2024-07-16 00:58:31.966144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.943 [2024-07-16 00:58:31.966157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.943 [2024-07-16 00:58:31.966170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.943 [2024-07-16 00:58:31.966183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.943 [2024-07-16 00:58:31.966195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.943 [2024-07-16 00:58:31.966208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.943 [2024-07-16 00:58:31.966252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec52e0 (9): Bad file descriptor 00:22:03.943 [2024-07-16 00:58:31.969522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.943 [2024-07-16 00:58:32.135404] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:03.943
00:22:03.943 Latency(us)
00:22:03.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:03.943 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:03.943 Verification LBA range: start 0x0 length 0x4000
00:22:03.943 NVMe0n1 : 15.01 8590.64 33.56 846.24 0.00 13536.91 807.06 23787.14
00:22:03.943 ===================================================================================================================
00:22:03.943 Total : 8590.64 33.56 846.24 0.00 13536.91 807.06 23787.14
00:22:03.943 Received shutdown signal, test time was about 15.000000 seconds
00:22:03.943
00:22:03.943 Latency(us)
00:22:03.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:03.943 ===================================================================================================================
00:22:03.943 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:03.943 00:58:37 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:03.943 00:58:37 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:22:03.943 00:58:37 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:22:03.943 00:58:37 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2709424
00:22:03.943 00:58:37 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:03.943 00:58:37 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2709424 /var/tmp/bdevperf.sock
00:22:03.943 00:58:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2709424 ']'
00:22:03.943 00:58:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:03.943 00:58:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:03.943 00:58:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:03.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
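The MiB/s column in the table above follows directly from the IOPS column and the 4096-byte I/O size bdevperf uses in this run; a quick arithmetic check of the 15.01 s result (the same relation holds for the 1-second verification run further down):

  # MiB/s = IOPS * 4096 bytes per I/O / (1024 * 1024) bytes per MiB
  awk 'BEGIN { printf "%.2f MiB/s\n", 8590.64 * 4096 / (1024 * 1024) }'
  # -> 33.56 MiB/s, matching the NVMe0n1 and Total rows above

The non-zero Fail/s column is consistent with I/O being aborted while the path switches of this run were in flight, which is exactly what the count=3 check above verifies against the try.txt log.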
00:22:03.943 00:58:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:03.943 00:58:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:03.943 00:58:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.943 00:58:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:03.943 00:58:38 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:03.943 [2024-07-16 00:58:38.390721] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:03.943 00:58:38 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:03.943 [2024-07-16 00:58:38.631417] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:03.943 00:58:38 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:04.511 NVMe0n1 00:22:04.511 00:58:39 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:04.768 00:22:04.768 00:58:39 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:05.333 00:22:05.333 00:58:39 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:05.333 00:58:39 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:05.333 00:58:40 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:05.592 00:58:40 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:08.872 00:58:43 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:08.872 00:58:43 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:08.872 00:58:43 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2710119 00:22:08.872 00:58:43 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:08.872 00:58:43 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2710119 00:22:10.246 0 00:22:10.246 00:58:44 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:10.246 [2024-07-16 00:58:37.884008] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:22:10.246 [2024-07-16 00:58:37.884095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2709424 ] 00:22:10.246 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.246 [2024-07-16 00:58:37.945599] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.246 [2024-07-16 00:58:38.058546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.246 [2024-07-16 00:58:40.288568] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:10.246 [2024-07-16 00:58:40.288679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.246 [2024-07-16 00:58:40.288702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.246 [2024-07-16 00:58:40.288720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.246 [2024-07-16 00:58:40.288734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.246 [2024-07-16 00:58:40.288748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.246 [2024-07-16 00:58:40.288761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.246 [2024-07-16 00:58:40.288775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.246 [2024-07-16 00:58:40.288788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.246 [2024-07-16 00:58:40.288802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:10.246 [2024-07-16 00:58:40.288853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:10.246 [2024-07-16 00:58:40.288899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21562e0 (9): Bad file descriptor 00:22:10.246 [2024-07-16 00:58:40.303117] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:10.246 Running I/O for 1 seconds... 
00:22:10.246
00:22:10.246 Latency(us)
00:22:10.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:10.246 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:10.246 Verification LBA range: start 0x0 length 0x4000
00:22:10.246 NVMe0n1 : 1.01 8678.25 33.90 0.00 0.00 14686.44 3021.94 12815.93
00:22:10.246 ===================================================================================================================
00:22:10.246 Total : 8678.25 33.90 0.00 0.00 14686.44 3021.94 12815.93
00:22:10.246 00:58:44 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:10.246 00:58:44 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:22:10.246 00:58:44 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:10.503 00:58:45 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:10.504 00:58:45 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:22:10.761 00:58:45 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:11.019 00:58:45 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:22:14.305 00:58:48 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:14.305 00:58:48 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:22:14.305 00:58:48 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2709424
00:22:14.305 00:58:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2709424 ']'
00:22:14.305 00:58:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2709424
00:22:14.305 00:58:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:22:14.305 00:58:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:14.305 00:58:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2709424
00:22:14.305 00:58:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:22:14.305 00:58:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:22:14.305 00:58:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2709424'
00:22:14.305 killing process with pid 2709424
00:22:14.305 00:58:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2709424
00:22:14.305 00:58:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2709424
00:22:14.572 00:58:49 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:22:14.572 00:58:49 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:14.857
00:58:49 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:14.857 rmmod nvme_tcp 00:22:14.857 rmmod nvme_fabrics 00:22:14.857 rmmod nvme_keyring 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2707271 ']' 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2707271 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2707271 ']' 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2707271 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2707271 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2707271' 00:22:14.857 killing process with pid 2707271 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2707271 00:22:14.857 00:58:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2707271 00:22:15.427 00:58:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:15.427 00:58:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:15.427 00:58:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:15.427 00:58:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:15.427 00:58:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:15.427 00:58:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.427 00:58:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.427 00:58:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.338 00:58:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:17.338 00:22:17.338 real 0m35.117s 00:22:17.338 user 2m3.674s 00:22:17.338 sys 0m5.845s 00:22:17.338 00:58:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:17.338 00:58:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
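Condensed, the failover exercise that produced the output above comes down to the following rpc.py sequence (a sketch assembled from the commands visible in this run; the bdevperf socket, addresses, ports and NQN are the ones used here, not defaults):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1

  # target side: expose the same subsystem on two additional ports
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422

  # bdevperf side: register all three paths under the same controller name
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN

  # drop the active path while I/O is running; bdev_nvme fails over to the next registered trid
  $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  $RPC -s $SOCK bdev_nvme_get_controllers | grep -q NVMe0   # the controller must survive the failover

The grep -c 'Resetting controller successful' / count=3 check earlier is the corresponding pass criterion for the 15-second run, one notice per failover recorded in try.txt.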
00:22:17.338 ************************************ 00:22:17.338 END TEST nvmf_failover 00:22:17.338 ************************************ 00:22:17.338 00:58:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:17.338 00:58:51 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:17.338 00:58:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:17.338 00:58:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:17.338 00:58:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:17.338 ************************************ 00:22:17.338 START TEST nvmf_host_discovery 00:22:17.338 ************************************ 00:22:17.338 00:58:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:17.338 * Looking for test storage... 00:22:17.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:17.338 00:58:52 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:22:17.338 00:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.239 00:58:53 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:19.239 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:19.239 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.239 00:58:53 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:19.239 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:19.239 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.239 00:58:53 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:19.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:22:19.239 00:22:19.239 --- 10.0.0.2 ping statistics --- 00:22:19.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.239 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:19.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:22:19.239 00:22:19.239 --- 10.0.0.1 ping statistics --- 00:22:19.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.239 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2712811 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2712811 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2712811 ']' 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.239 00:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:19.240 00:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.499 [2024-07-16 00:58:54.029077] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:22:19.499 [2024-07-16 00:58:54.029156] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.499 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.499 [2024-07-16 00:58:54.097382] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.499 [2024-07-16 00:58:54.214080] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.499 [2024-07-16 00:58:54.214141] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.499 [2024-07-16 00:58:54.214158] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.499 [2024-07-16 00:58:54.214178] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.499 [2024-07-16 00:58:54.214190] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
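The addresses used throughout this test come from the namespace split set up a few entries earlier: the first e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed, using the same commands as the log:

  # target-side interface lives in its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # initiator-side interface stays in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # connectivity check in both directions
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched under 'ip netns exec cvl_0_0_ns_spdk', so every listener it opens (8009 for discovery, 4420 for the data subsystem) sits behind 10.0.0.2 inside that namespace.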
00:22:19.499 [2024-07-16 00:58:54.214220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.431 [2024-07-16 00:58:55.032916] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.431 [2024-07-16 00:58:55.041073] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.431 null0 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.431 null1 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2712957 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 2712957 /tmp/host.sock 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2712957 ']' 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:20.431 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:20.431 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.431 [2024-07-16 00:58:55.113440] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:22:20.431 [2024-07-16 00:58:55.113505] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2712957 ] 00:22:20.431 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.431 [2024-07-16 00:58:55.175260] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.688 [2024-07-16 00:58:55.291975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:20.688 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 
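Worth spelling out at this point: two SPDK instances are involved. The target (pid 2712811, default RPC socket, running inside the namespace) owns the discovery service on 10.0.0.2:8009 and the storage, while a second nvmf_tgt (pid 2712957, RPC socket /tmp/host.sock) plays the host and is told to follow that discovery service. A condensed sketch of the calls issued so far, rpc_cmd being the autotest wrapper used in this log:

  # target: transport, discovery listener, and backing null bdevs
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc_cmd bdev_null_create null0 1000 512
  rpc_cmd bdev_null_create null1 1000 512

  # host: follow the discovery service and auto-attach whatever it reports, under the base name "nvme"
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

  # target: provision the subsystem that the discovery service will eventually report
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0

Nothing is attached yet at this stage; only once cnode0 gets a 10.0.0.2:4420 listener and nqn.2021-12.io.spdk:test as an allowed host (the next few steps) does the discovery poller attach it, which is what the nvme0 / nvme0n1 waits below check for.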
00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:20.945 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.203 [2024-07-16 00:58:55.714951] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:22:21.203 00:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:22:21.769 [2024-07-16 00:58:56.441438] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:21.769 [2024-07-16 00:58:56.441464] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:21.769 [2024-07-16 00:58:56.441491] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:22.028 [2024-07-16 00:58:56.528759] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:22.028 [2024-07-16 00:58:56.714161] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:22.028 [2024-07-16 00:58:56.714202] 
bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
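The xtrace above leans on a handful of small helpers (get_subsystem_names, get_bdev_list, get_notification_count, waitforcondition) whose bodies live in host/discovery.sh and common/autotest_common.sh and are not reproduced in this log. The bash sketch below reconstructs them from the commands visible in the trace; treat it as an assumed shape rather than the verbatim test source, and note that /tmp/host.sock is simply the socket the test passes to every rpc_cmd call.

    # Sketch (assumed, reconstructed from the trace above -- not the verbatim source).
    get_subsystem_names() {
        # Controller names the host application currently knows about.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # Bdev names exposed by those controllers, as one sorted, space-separated string.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_notification_count() {
        # Notifications newer than the last seen notify_id; advance the high-water mark.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    waitforcondition() {
        # Poll an arbitrary bash condition (passed as a string) up to 10 times, 1 s apart.
        local cond=$1 max=10
        while ((max--)); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

The waits seen in the trace, e.g. waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]', are just this polling loop wrapped around the corresponding getter, which is why each check shows up as a local max=10 / (( max-- )) / eval triplet in the xtrace output.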
00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:22.287 00:58:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.287 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:22:22.287 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:22.287 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:22.287 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:22.287 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:22.287 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:22.287 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:22.287 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:22.287 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:22.287 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:22.287 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:22.287 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:22.287 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.287 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.287 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.545 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:22.545 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:22.545 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:22.545 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:22.545 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:22.545 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.545 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.545 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.545 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:22.545 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:22.545 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:22.545 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:22.545 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:22.545 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:22.545 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.545 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.545 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:22.545 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.546 [2024-07-16 00:58:57.159340] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:22.546 [2024-07-16 00:58:57.160477] bdev_nvme.c:6970:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:22.546 [2024-07-16 00:58:57.160524] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.546 [2024-07-16 00:58:57.287405] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:22.546 00:58:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:22:22.805 [2024-07-16 00:58:57.387362] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:22.806 [2024-07-16 00:58:57.387388] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:22.806 [2024-07-16 00:58:57.387399] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.744 [2024-07-16 00:58:58.379431] bdev_nvme.c:6970:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:23.744 [2024-07-16 00:58:58.379478] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:23.744 [2024-07-16 00:58:58.385343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.744 [2024-07-16 00:58:58.385378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.744 [2024-07-16 00:58:58.385397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.744 [2024-07-16 00:58:58.385412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.744 [2024-07-16 00:58:58.385427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.744 [2024-07-16 00:58:58.385441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.744 [2024-07-16 00:58:58.385456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.744 [2024-07-16 00:58:58.385479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.744 [2024-07-16 00:58:58.385493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf43e20 is same with the state(5) to be set 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:23.744 [2024-07-16 00:58:58.395347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf43e20 (9): Bad file descriptor 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.744 [2024-07-16 00:58:58.405391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:23.744 [2024-07-16 00:58:58.405693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.744 [2024-07-16 00:58:58.405726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf43e20 with addr=10.0.0.2, port=4420 00:22:23.744 [2024-07-16 00:58:58.405744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf43e20 is same with the state(5) to be set 00:22:23.744 [2024-07-16 00:58:58.405769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf43e20 (9): Bad file descriptor 00:22:23.744 [2024-07-16 00:58:58.405808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:23.744 [2024-07-16 00:58:58.405828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:23.744 [2024-07-16 00:58:58.405847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:23.744 [2024-07-16 00:58:58.405870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
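The repeated posix_sock_create failures that follow report errno = 111: the 4420 listener has just been removed by nvmf_subsystem_remove_listener, so every reconnect attempt to 10.0.0.2:4420 is refused until the next discovery log page prunes that path. On Linux, 111 is ECONNREFUSED, which can be confirmed from the system headers (hypothetical one-liner, not part of the test):

    grep -w 111 /usr/include/asm-generic/errno.h
    # expected match: #define ECONNREFUSED 111 /* Connection refused */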
00:22:23.744 [2024-07-16 00:58:58.415477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:23.744 [2024-07-16 00:58:58.415675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.744 [2024-07-16 00:58:58.415705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf43e20 with addr=10.0.0.2, port=4420 00:22:23.744 [2024-07-16 00:58:58.415722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf43e20 is same with the state(5) to be set 00:22:23.744 [2024-07-16 00:58:58.415746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf43e20 (9): Bad file descriptor 00:22:23.744 [2024-07-16 00:58:58.415768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:23.744 [2024-07-16 00:58:58.415783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:23.744 [2024-07-16 00:58:58.415798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:23.744 [2024-07-16 00:58:58.415818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:23.744 [2024-07-16 00:58:58.425556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:23.744 [2024-07-16 00:58:58.425813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.744 [2024-07-16 00:58:58.425847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf43e20 with addr=10.0.0.2, port=4420 00:22:23.744 [2024-07-16 00:58:58.425865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf43e20 is same with the state(5) to be set 00:22:23.744 [2024-07-16 00:58:58.425897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf43e20 (9): Bad file descriptor 00:22:23.744 [2024-07-16 00:58:58.425964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:23.744 [2024-07-16 00:58:58.425984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:23.744 [2024-07-16 00:58:58.425998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:22:23.744 [2024-07-16 00:58:58.426017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:23.744 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:23.744 [2024-07-16 00:58:58.435642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:23.744 [2024-07-16 00:58:58.435896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.744 [2024-07-16 00:58:58.435943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf43e20 with addr=10.0.0.2, port=4420 00:22:23.745 [2024-07-16 00:58:58.435960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf43e20 is same with the state(5) to be set 00:22:23.745 [2024-07-16 00:58:58.435982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf43e20 (9): Bad file descriptor 00:22:23.745 [2024-07-16 00:58:58.436043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:23.745 [2024-07-16 00:58:58.436062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:23.745 [2024-07-16 00:58:58.436076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:23.745 [2024-07-16 00:58:58.436095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:23.745 [2024-07-16 00:58:58.445722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:23.745 [2024-07-16 00:58:58.445950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.745 [2024-07-16 00:58:58.445978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf43e20 with addr=10.0.0.2, port=4420 00:22:23.745 [2024-07-16 00:58:58.445993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf43e20 is same with the state(5) to be set 00:22:23.745 [2024-07-16 00:58:58.446015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf43e20 (9): Bad file descriptor 00:22:23.745 [2024-07-16 00:58:58.446035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:23.745 [2024-07-16 00:58:58.446048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:23.745 [2024-07-16 00:58:58.446062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:23.745 [2024-07-16 00:58:58.446099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:23.745 [2024-07-16 00:58:58.455799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:23.745 [2024-07-16 00:58:58.456053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.745 [2024-07-16 00:58:58.456081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf43e20 with addr=10.0.0.2, port=4420 00:22:23.745 [2024-07-16 00:58:58.456097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf43e20 is same with the state(5) to be set 00:22:23.745 [2024-07-16 00:58:58.456118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf43e20 (9): Bad file descriptor 00:22:23.745 [2024-07-16 00:58:58.456163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:23.745 [2024-07-16 00:58:58.456196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:23.745 [2024-07-16 00:58:58.456213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:23.745 [2024-07-16 00:58:58.456234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:23.745 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.745 [2024-07-16 00:58:58.465871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:23.745 [2024-07-16 00:58:58.466195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.745 [2024-07-16 00:58:58.466226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf43e20 with addr=10.0.0.2, port=4420 00:22:23.745 [2024-07-16 00:58:58.466243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf43e20 is same with the state(5) to be set 00:22:23.745 [2024-07-16 00:58:58.466267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf43e20 (9): Bad file descriptor 00:22:23.745 [2024-07-16 00:58:58.466323] bdev_nvme.c:6775:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:23.745 [2024-07-16 00:58:58.466352] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:23.745 [2024-07-16 00:58:58.466388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:23.745 [2024-07-16 00:58:58.466411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:23.745 [2024-07-16 00:58:58.466427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:23.745 [2024-07-16 00:58:58.466451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
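Just below, the trace waits for get_subsystem_paths nvme0 to shrink back to the second port only and for the notification count to settle at 0. Two more helpers of the same rpc_cmd-plus-jq shape appear here and further down (get_subsystem_paths, get_discovery_ctrlrs); the following is again a reconstruction from the commands visible in the trace, not the verbatim source:

    get_subsystem_paths() {
        # trsvcid (port) of every active path for controller $1, numerically sorted.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    get_discovery_ctrlrs() {
        # Names of the discovery controllers the host application is running.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name' | sort | xargs
    }

    # The wait exercised at host/discovery.sh@131 below (NVMF_SECOND_PORT is 4421 in this run):
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'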
00:22:23.745 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:23.745 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:23.745 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:23.745 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:23.745 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:23.745 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:23.745 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:23.745 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:23.745 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:23.745 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.745 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.745 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:23.745 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:23.745 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:23.745 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:24.005 00:58:58 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.005 00:58:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.381 [2024-07-16 00:58:59.750070] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:25.381 [2024-07-16 00:58:59.750091] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:25.381 [2024-07-16 00:58:59.750117] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:25.381 [2024-07-16 00:58:59.837412] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:25.381 [2024-07-16 00:58:59.943896] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:25.381 [2024-07-16 00:58:59.943947] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:25.381 request: 00:22:25.381 { 00:22:25.381 "name": "nvme", 00:22:25.381 "trtype": "tcp", 00:22:25.381 "traddr": "10.0.0.2", 00:22:25.381 "adrfam": "ipv4", 00:22:25.381 "trsvcid": "8009", 00:22:25.381 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:25.381 "wait_for_attach": true, 00:22:25.381 "method": "bdev_nvme_start_discovery", 00:22:25.381 "req_id": 1 00:22:25.381 } 00:22:25.381 Got JSON-RPC error response 00:22:25.381 response: 00:22:25.381 { 00:22:25.381 "code": -17, 00:22:25.381 "message": "File exists" 00:22:25.381 } 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.381 00:58:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:25.381 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.381 00:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:25.381 00:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:25.381 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:25.381 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:25.381 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:22:25.381 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:25.381 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:25.381 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:25.381 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:25.381 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.381 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.381 request: 00:22:25.381 { 00:22:25.381 "name": "nvme_second", 00:22:25.381 "trtype": "tcp", 00:22:25.381 "traddr": "10.0.0.2", 00:22:25.381 "adrfam": "ipv4", 00:22:25.381 "trsvcid": "8009", 00:22:25.381 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:25.381 "wait_for_attach": true, 00:22:25.381 "method": "bdev_nvme_start_discovery", 00:22:25.381 "req_id": 1 00:22:25.381 } 00:22:25.381 Got JSON-RPC error response 00:22:25.382 response: 00:22:25.382 { 00:22:25.382 "code": -17, 00:22:25.382 "message": "File exists" 00:22:25.382 } 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.382 00:59:00 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.382 00:59:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.761 [2024-07-16 00:59:01.135678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.761 [2024-07-16 00:59:01.135729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf44ba0 with addr=10.0.0.2, port=8010 00:22:26.761 [2024-07-16 00:59:01.135779] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:26.761 [2024-07-16 00:59:01.135798] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:26.761 [2024-07-16 00:59:01.135812] bdev_nvme.c:7050:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:27.698 [2024-07-16 00:59:02.138191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.698 [2024-07-16 00:59:02.138272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf44ba0 with addr=10.0.0.2, port=8010 00:22:27.698 [2024-07-16 00:59:02.138309] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:27.698 [2024-07-16 00:59:02.138326] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:27.698 [2024-07-16 00:59:02.138341] bdev_nvme.c:7050:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:28.674 [2024-07-16 00:59:03.140207] bdev_nvme.c:7031:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:28.674 request: 00:22:28.674 { 00:22:28.674 "name": "nvme_second", 00:22:28.674 "trtype": "tcp", 00:22:28.674 "traddr": "10.0.0.2", 00:22:28.674 "adrfam": "ipv4", 00:22:28.674 "trsvcid": "8010", 00:22:28.674 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:28.674 "wait_for_attach": false, 00:22:28.674 "attach_timeout_ms": 3000, 00:22:28.674 "method": "bdev_nvme_start_discovery", 00:22:28.674 "req_id": 1 00:22:28.674 } 00:22:28.674 Got JSON-RPC error response 00:22:28.674 response: 00:22:28.674 { 00:22:28.674 "code": -110, 
00:22:28.674 "message": "Connection timed out" 00:22:28.674 } 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2712957 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:28.674 rmmod nvme_tcp 00:22:28.674 rmmod nvme_fabrics 00:22:28.674 rmmod nvme_keyring 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2712811 ']' 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2712811 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2712811 ']' 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2712811 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2712811 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2712811' 00:22:28.674 killing process with pid 2712811 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2712811 00:22:28.674 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2712811 00:22:28.931 00:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:28.931 00:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:28.931 00:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:28.931 00:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:28.931 00:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:28.931 00:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.931 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.931 00:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:31.492 00:22:31.492 real 0m13.632s 00:22:31.492 user 0m19.834s 00:22:31.492 sys 0m2.645s 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.492 ************************************ 00:22:31.492 END TEST nvmf_host_discovery 00:22:31.492 ************************************ 00:22:31.492 00:59:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:31.492 00:59:05 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:31.492 00:59:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:31.492 00:59:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:31.492 00:59:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:31.492 ************************************ 00:22:31.492 START TEST nvmf_host_multipath_status 00:22:31.492 ************************************ 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:31.492 * Looking for test storage... 
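Annotation (not part of the captured log): the NOT rpc_cmd assertions used throughout the discovery test above come from the harness's negative-assertion helper, which runs a command that is expected to fail and inverts its exit status; that is why the trace records es=1 and the (( es > 128 )) check after each failing RPC. A minimal standalone sketch of that pattern (not the actual common/autotest_common.sh implementation) is:

  # Succeed only when "$@" fails; mirrors the es bookkeeping visible in the trace.
  NOT() {
      local es=0
      "$@" || es=$?
      # exit statuses above 128 mean the command died on a signal;
      # normalize them so callers only see a generic failure
      (( es > 128 )) && es=1
      # invert: a non-zero status from the wrapped command makes NOT return 0
      (( es != 0 ))
  }

  # usage: assert that registering a duplicate discovery controller is rejected
  NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
      -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w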
00:22:31.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:31.492 00:59:05 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:22:31.492 00:59:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:33.399 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:33.399 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
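Annotation (not part of the captured log): the NIC scan above classifies PCI functions by vendor/device ID, and the two devices found here (0x8086 - 0x159b, bound to the ice driver) are the ones the trace places in the e810 array, matching SPDK_TEST_NVMF_NICS=e810. Outside the harness the same check can be made directly from sysfs; this is an illustrative sketch, not the test's pci_bus_cache logic:

  # List PCI functions whose vendor:device pair matches the IDs the log reports
  # for the e810 NICs (0x8086:0x159b).
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(cat "$dev/vendor")   # e.g. 0x8086
      device=$(cat "$dev/device")   # e.g. 0x159b
      if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
          echo "Found $(basename "$dev") ($vendor - $device)"
      fi
  done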
00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:33.399 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:33.399 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:33.399 00:59:07 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.399 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:33.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:22:33.400 00:22:33.400 --- 10.0.0.2 ping statistics --- 00:22:33.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.400 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:33.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:22:33.400 00:22:33.400 --- 10.0.0.1 ping statistics --- 00:22:33.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.400 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2715993 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2715993 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2715993 ']' 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.400 00:59:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:33.400 [2024-07-16 00:59:07.877446] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:22:33.400 [2024-07-16 00:59:07.877527] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.400 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.400 [2024-07-16 00:59:07.944767] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:33.400 [2024-07-16 00:59:08.060019] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.400 [2024-07-16 00:59:08.060083] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.400 [2024-07-16 00:59:08.060099] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.400 [2024-07-16 00:59:08.060112] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.400 [2024-07-16 00:59:08.060124] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.400 [2024-07-16 00:59:08.060212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.400 [2024-07-16 00:59:08.060218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.366 00:59:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.366 00:59:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:34.366 00:59:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:34.366 00:59:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:34.366 00:59:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:34.366 00:59:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.367 00:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2715993 00:22:34.367 00:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:34.367 [2024-07-16 00:59:09.108022] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.624 00:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:34.884 Malloc0 00:22:34.884 00:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:35.145 00:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:35.405 00:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:35.405 [2024-07-16 00:59:10.140870] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.663 00:59:10 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:35.663 [2024-07-16 00:59:10.381479] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:35.663 00:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2716282 00:22:35.663 00:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:35.663 00:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:35.663 00:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2716282 /var/tmp/bdevperf.sock 00:22:35.663 00:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2716282 ']' 00:22:35.663 00:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:35.663 00:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.663 00:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:35.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:35.663 00:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.663 00:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:36.230 00:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.230 00:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:36.230 00:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:36.230 00:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:36.796 Nvme0n1 00:22:36.796 00:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:37.366 Nvme0n1 00:22:37.366 00:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:37.366 00:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:39.266 00:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:39.266 00:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:39.523 00:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:39.781 00:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:40.717 00:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:40.717 00:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:40.717 00:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.717 00:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:40.976 00:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.976 00:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:40.976 00:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.976 00:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:41.234 00:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:41.234 00:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:41.234 00:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.234 00:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:41.491 00:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.491 00:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:41.491 00:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.491 00:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:41.749 00:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.749 00:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:41.749 00:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.749 00:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:42.007 00:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:42.007 00:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:42.007 00:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.007 00:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:42.265 00:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:42.265 00:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:42.265 00:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:42.523 00:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:42.781 00:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:43.718 00:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:43.718 00:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:43.719 00:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.719 00:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:43.977 00:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:43.977 00:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:43.977 00:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.977 00:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:44.235 00:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.235 00:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:44.235 00:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.235 00:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:44.494 00:59:19 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.494 00:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:44.494 00:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.494 00:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:44.752 00:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.752 00:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:44.752 00:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.752 00:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:45.010 00:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.010 00:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:45.010 00:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.010 00:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:45.269 00:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.269 00:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:45.269 00:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:45.576 00:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:45.834 00:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:46.772 00:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:46.772 00:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:46.772 00:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.772 00:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:47.031 00:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.031 00:59:21 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:47.031 00:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.031 00:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:47.289 00:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:47.289 00:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:47.289 00:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.289 00:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:47.547 00:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.547 00:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:47.547 00:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.547 00:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:47.805 00:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.805 00:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:47.805 00:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.805 00:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:48.062 00:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.062 00:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:48.062 00:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.062 00:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:48.319 00:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.319 00:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:48.319 00:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:48.576 00:59:23 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:48.832 00:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:50.202 00:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:50.202 00:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:50.202 00:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.202 00:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:50.202 00:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.202 00:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:50.202 00:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.202 00:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:50.460 00:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:50.460 00:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:50.460 00:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.460 00:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:50.717 00:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.717 00:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:50.717 00:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.717 00:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:50.974 00:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.974 00:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:50.974 00:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.974 00:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:51.230 00:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:22:51.230 00:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:51.230 00:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.230 00:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:51.487 00:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:51.487 00:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:51.487 00:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:51.744 00:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:52.002 00:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:52.937 00:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:52.937 00:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:52.937 00:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.937 00:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:53.195 00:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:53.195 00:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:53.195 00:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.195 00:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:53.452 00:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:53.452 00:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:53.452 00:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.452 00:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:53.709 00:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:53.709 00:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:22:53.709 00:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.710 00:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:53.967 00:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:53.967 00:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:53.967 00:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.967 00:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:54.224 00:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:54.224 00:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:54.224 00:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.224 00:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:54.482 00:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:54.482 00:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:54.482 00:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:54.739 00:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:54.997 00:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:55.933 00:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:55.933 00:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:55.933 00:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.933 00:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:56.192 00:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:56.192 00:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:56.192 00:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:56.192 00:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:56.450 00:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.450 00:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:56.450 00:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:56.450 00:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:56.709 00:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.709 00:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:56.709 00:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:56.709 00:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:56.967 00:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.967 00:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:56.967 00:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:56.967 00:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:57.225 00:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:57.225 00:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:57.225 00:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.225 00:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:57.484 00:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.484 00:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:57.742 00:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:57.742 00:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:22:58.000 00:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:58.258 00:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:59.228 00:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:59.228 00:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:59.228 00:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.228 00:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:59.515 00:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.515 00:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:59.515 00:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.515 00:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:59.773 00:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.773 00:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:59.773 00:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.773 00:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:00.032 00:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.032 00:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:00.032 00:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.032 00:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:00.290 00:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.290 00:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:00.290 00:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.290 00:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:00.549 00:59:35 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.549 00:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:00.549 00:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.549 00:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:00.807 00:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.807 00:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:00.807 00:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:01.066 00:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:01.325 00:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:02.260 00:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:02.260 00:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:02.260 00:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.260 00:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:02.518 00:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:02.518 00:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:02.518 00:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.519 00:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:02.775 00:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.775 00:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:02.775 00:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.775 00:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:03.032 00:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.032 00:59:37 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:03.032 00:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.032 00:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:03.289 00:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.289 00:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:03.289 00:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.289 00:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:03.546 00:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.546 00:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:03.546 00:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.546 00:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:03.802 00:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.802 00:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:03.802 00:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:04.060 00:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:04.318 00:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:05.254 00:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:05.254 00:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:05.254 00:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.254 00:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:05.512 00:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.512 00:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:05.512 00:59:40 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.512 00:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:05.770 00:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.770 00:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:05.770 00:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.770 00:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:06.027 00:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.027 00:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:06.027 00:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.027 00:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:06.283 00:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.283 00:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:06.283 00:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.283 00:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:06.540 00:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.540 00:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:06.540 00:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.540 00:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:06.796 00:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.796 00:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:06.796 00:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:07.052 00:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:07.309 00:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:08.271 00:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:08.271 00:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:08.271 00:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.271 00:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:08.528 00:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.528 00:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:08.528 00:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.528 00:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:08.784 00:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:08.784 00:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:08.784 00:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.784 00:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:09.040 00:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:09.041 00:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:09.041 00:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.041 00:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:09.296 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:09.296 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:09.296 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.296 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:09.553 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:09.553 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:09.553 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.553 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:09.811 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:09.811 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2716282 00:23:09.811 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2716282 ']' 00:23:09.811 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2716282 00:23:09.811 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:23:09.811 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:09.811 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2716282 00:23:09.811 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:09.811 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:09.811 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2716282' 00:23:09.811 killing process with pid 2716282 00:23:09.811 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2716282 00:23:09.811 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2716282 00:23:10.071 Connection closed with partial response: 00:23:10.071 00:23:10.071 00:23:10.334 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2716282 00:23:10.334 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:10.334 [2024-07-16 00:59:10.444503] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:23:10.334 [2024-07-16 00:59:10.444583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2716282 ] 00:23:10.334 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.334 [2024-07-16 00:59:10.504701] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.334 [2024-07-16 00:59:10.611286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.334 Running I/O for 90 seconds... 
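Each check_status round in the log above reduces to two helpers from test/nvmf/host/multipath_status.sh (visible at host/multipath_status.sh@59-64 in the trace): set_ANA_state flips the ANA state of the two target listeners over the target-side RPC socket, and port_status asks bdevperf's bdev_nvme layer for its I/O paths and compares one attribute of the path with the given trsvcid against the expected value. A minimal sketch of that pattern, reconstructed from the rpc.py and jq invocations logged above (the NQN, address and ports are the ones used by this run; the real script may differ in detail):

# Sketch only -- reconstructed from the commands logged above, not the verbatim test script.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

set_ANA_state() {            # e.g. set_ANA_state optimized inaccessible
    # First argument applies to the 4420 listener, second to the 4421 listener.
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

port_status() {              # e.g. port_status 4421 accessible false
    local port=$1 attr=$2 expected=$3
    local actual
    # Query bdevperf (not the target) for its view of the I/O paths and pick
    # the requested attribute of the path whose trsvcid matches the port.
    actual=$($rpc -s $bdevperf_sock bdev_nvme_get_io_paths |
             jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ "$actual" == "$expected" ]]
}

Roughly: "connected" reports whether the I/O qpair to that listener is still up, "accessible" whether its ANA state permits I/O, and "current" whether that path is the one the poll group is actively routing I/O to; the sleep 1 between set_ANA_state and the checks gives the initiator time to pick up the ANA change. After the single-current rounds, the test switches the bdev to bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active (host/multipath_status.sh@116 above), after which both reachable paths may report current=true at the same time, which is what the later check_status true true ... rounds verify.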
00:23:10.334 [2024-07-16 00:59:26.297460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.334 [2024-07-16 00:59:26.297531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.297622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.297643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.297667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.297684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.297707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.297724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.297746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.297763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.297784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.297800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.297822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.297839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.297860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.297884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.297926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.297944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.297966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.297983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.298005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.298033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.298057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.298074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.298096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.298112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.298135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.298152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.298175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.298191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.298235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.298252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.298992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.299237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.299270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.299289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.299327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.299344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.299366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.299383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.299406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.299422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.299444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.299460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.299482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.334 [2024-07-16 00:59:26.299499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:10.334 [2024-07-16 00:59:26.299526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.299544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.299567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.299583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.299776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.299795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.299817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.299834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.299857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.299874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.299905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.299923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.299946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:10.335 [2024-07-16 00:59:26.299963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.299986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.300003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.300026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.300043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.300066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.300082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.300105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.300121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.300144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.300161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.300211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.300229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.300251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.300267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.300290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.300306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.300329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.300344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.300366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 
lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.300382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.300404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.300420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.300442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.300458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.300480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.300496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.300518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.300534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.300556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.300572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.300595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.300611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.300633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.300649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.300671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.300691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.300955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.300978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.301002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.301019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.301043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.301059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.301082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.301099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.301122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.301138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.301162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.301178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.301217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.301234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.301256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.301272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.301295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.301311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.301333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.301349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.301371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.301387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
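The (03/02) annotations in the completions above are the status code type / status code pair that spdk_nvme_print_completion logs: type 0x3 is the NVMe Path Related Status group, and code 0x02 within it is Asymmetric Access Inaccessible, which is what the host is expected to see for I/O routed to a listener whose ANA state was just set to inaccessible. A small helper for reading these pairs while scanning the trace (a convenience sketch for log readers, not part of the SPDK scripts):

# Sketch: decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion,
# covering the path-related status code type (SCT 0x3) that dominates this trace.
decode_path_status() {
    local sct=$1 sc=$2
    if [[ "$sct" != "03" ]]; then
        echo "not a path-related status (sct=$sct)"
        return 1
    fi
    case "$sc" in
        00) echo "Internal Path Error" ;;
        01) echo "Asymmetric Access Persistent Loss" ;;
        02) echo "Asymmetric Access Inaccessible" ;;
        03) echo "Asymmetric Access Transition" ;;
        *)  echo "other path-related status (sc=$sc)" ;;
    esac
}

decode_path_status 03 02   # -> Asymmetric Access Inaccessible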
00:23:10.335 [2024-07-16 00:59:26.301408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.301428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.301451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.301468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.301490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.301506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.301528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.301544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.301567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.301583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:10.335 [2024-07-16 00:59:26.301819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.335 [2024-07-16 00:59:26.301842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.301873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.301901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.301930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.301948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.301974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.301991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.302036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.302079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.302122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.302166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.302236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.302279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.302321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.302364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.302407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.302449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.302491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.302534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.302576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.302619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.302661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.302703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.302751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.302794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-16 00:59:26.302836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.336 [2024-07-16 00:59:26.302901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:10.336 [2024-07-16 00:59:26.302958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.302985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.336 [2024-07-16 00:59:26.303003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.303030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.336 [2024-07-16 00:59:26.303048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.303075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.336 [2024-07-16 00:59:26.303092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.303120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.336 [2024-07-16 00:59:26.303137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.303164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.336 [2024-07-16 00:59:26.303195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.303223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.336 [2024-07-16 00:59:26.303240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.303266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.336 [2024-07-16 00:59:26.303283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.303313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.336 [2024-07-16 00:59:26.303331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.303357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.336 [2024-07-16 00:59:26.303374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.303400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 
lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.336 [2024-07-16 00:59:26.303417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.303443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.336 [2024-07-16 00:59:26.303460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.303486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.336 [2024-07-16 00:59:26.303503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.303529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.336 [2024-07-16 00:59:26.303546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.303572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.336 [2024-07-16 00:59:26.303589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.303615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.336 [2024-07-16 00:59:26.303632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.303658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.336 [2024-07-16 00:59:26.303675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.303702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.336 [2024-07-16 00:59:26.303718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:10.336 [2024-07-16 00:59:26.303745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.303762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.303790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.303806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.303833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.303853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.303903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.303923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.303951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.303969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.303996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.304013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.304040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.304058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.304085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.304102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.304129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.304146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.304174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.304207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.304234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.304250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.304276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.304293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:23:10.337 [2024-07-16 00:59:26.304319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.304336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.304362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.304379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.304405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.304425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.304453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.337 [2024-07-16 00:59:26.304470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.304496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.304513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.304540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.304557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.304583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.304599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.304626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.304643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.304669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.304686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:26.304712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:26.304729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.968746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:130128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:41.968800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.968894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:130144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:41.968917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.968957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:130160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:41.968975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.968998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:130176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:41.969014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.969037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:130192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:41.969053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.969086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.337 [2024-07-16 00:59:41.969104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.969126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.337 [2024-07-16 00:59:41.969143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.969165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.337 [2024-07-16 00:59:41.969181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.969219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:130208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:41.969235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.969256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:41.969271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.969292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:130240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:41.969308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.969329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.337 [2024-07-16 00:59:41.969345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.969366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.337 [2024-07-16 00:59:41.969382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.969403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.337 [2024-07-16 00:59:41.969419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.969440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.337 [2024-07-16 00:59:41.969456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.969477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.337 [2024-07-16 00:59:41.969493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.969514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:130264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:41.969530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.969555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:130280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:41.969572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.969593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:130296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.337 [2024-07-16 00:59:41.969609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:10.337 [2024-07-16 00:59:41.969630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:129888 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:10.337 [2024-07-16 00:59:41.969646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 00:59:41.969667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:129920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.338 [2024-07-16 00:59:41.969683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 00:59:41.969704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.338 [2024-07-16 00:59:41.969720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 00:59:41.969741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.338 [2024-07-16 00:59:41.969757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 00:59:41.969778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.338 [2024-07-16 00:59:41.969794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 00:59:41.969815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.338 [2024-07-16 00:59:41.969831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 00:59:41.969852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:129696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.338 [2024-07-16 00:59:41.969868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 00:59:41.969915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.338 [2024-07-16 00:59:41.969934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 00:59:41.969957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.338 [2024-07-16 00:59:41.969974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 00:59:41.969996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:130304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.338 [2024-07-16 00:59:41.970012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 00:59:41.970034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:53 nsid:1 lba:130048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.338 [2024-07-16 00:59:41.970056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 00:59:41.970078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.338 [2024-07-16 00:59:41.970096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 00:59:41.970118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.338 [2024-07-16 00:59:41.970135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 00:59:41.970156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:130328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.338 [2024-07-16 00:59:41.970173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 00:59:41.970210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:130344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.338 [2024-07-16 00:59:41.970226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 00:59:41.970247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:130360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.338 [2024-07-16 00:59:41.970263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 00:59:41.970284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:130376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.338 [2024-07-16 00:59:41.970300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 00:59:41.970321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:130392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.338 [2024-07-16 00:59:41.970337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 00:59:41.970359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:130408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.338 [2024-07-16 00:59:41.970376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 00:59:41.973712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.338 [2024-07-16 00:59:41.973741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:10.338 [2024-07-16 
00:59:41.973786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:130440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:10.338 [2024-07-16 00:59:41.973805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:23:10.338 [2024-07-16 00:59:41.973843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:10.338 [2024-07-16 00:59:41.973860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:23:10.338 [2024-07-16 00:59:41.973905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:130472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:10.338 [2024-07-16 00:59:41.973930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:23:10.338 [2024-07-16 00:59:41.973954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:130488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:10.338 [2024-07-16 00:59:41.973972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:23:10.338 [2024-07-16 00:59:41.973994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:10.338 [2024-07-16 00:59:41.974011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:23:10.338 [2024-07-16 00:59:41.974033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:129824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:10.338 [2024-07-16 00:59:41.974049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:23:10.338 [2024-07-16 00:59:41.974072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:10.338 [2024-07-16 00:59:41.974089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:23:10.338 Received shutdown signal, test time was about 32.549633 seconds
00:23:10.338
00:23:10.338 Latency(us)
00:23:10.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:10.338 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:10.338 Verification LBA range: start 0x0 length 0x4000
00:23:10.338 Nvme0n1 : 32.55 6645.28 25.96 0.00 0.00 19226.38 1080.13 4026531.84
00:23:10.338 ===================================================================================================================
00:23:10.338 Total : 6645.28 25.96 0.00 0.00 19226.38 1080.13 4026531.84
00:23:10.338 00:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:10.338 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:23:10.338 00:59:45 nvmf_tcp.nvmf_host_multipath_status --
host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:10.598 rmmod nvme_tcp 00:23:10.598 rmmod nvme_fabrics 00:23:10.598 rmmod nvme_keyring 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2715993 ']' 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2715993 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2715993 ']' 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2715993 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2715993 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2715993' 00:23:10.598 killing process with pid 2715993 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2715993 00:23:10.598 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2715993 00:23:10.857 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:10.857 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:10.857 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:10.857 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:10.857 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:10.857 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.857 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.857 00:59:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.424 
00:59:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:13.424 00:23:13.424 real 0m41.878s 00:23:13.424 user 1m51.932s 00:23:13.424 sys 0m15.138s 00:23:13.424 00:59:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:13.424 00:59:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:13.424 ************************************ 00:23:13.424 END TEST nvmf_host_multipath_status 00:23:13.424 ************************************ 00:23:13.424 00:59:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:13.424 00:59:47 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:13.424 00:59:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:13.424 00:59:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:13.424 00:59:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:13.424 ************************************ 00:23:13.424 START TEST nvmf_discovery_remove_ifc 00:23:13.424 ************************************ 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:13.424 * Looking for test storage... 00:23:13.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:23:13.424 00:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@295 -- # net_devs=() 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:14.803 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == 
rdma ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:14.803 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:14.803 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:14.803 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:23:14.803 00:59:49 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:14.803 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.804 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.804 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.804 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:14.804 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.804 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.804 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:14.804 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.804 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.804 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:14.804 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:14.804 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.804 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:15.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:23:15.063 00:23:15.063 --- 10.0.0.2 ping statistics --- 00:23:15.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.063 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:15.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:23:15.063 00:23:15.063 --- 10.0.0.1 ping statistics --- 00:23:15.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.063 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2722484 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2722484 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2722484 ']' 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:15.063 00:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:15.063 [2024-07-16 00:59:49.701152] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:23:15.063 [2024-07-16 00:59:49.701242] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.063 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.063 [2024-07-16 00:59:49.769094] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.321 [2024-07-16 00:59:49.884198] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.321 [2024-07-16 00:59:49.884257] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.321 [2024-07-16 00:59:49.884273] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.321 [2024-07-16 00:59:49.884286] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.321 [2024-07-16 00:59:49.884298] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.321 [2024-07-16 00:59:49.884328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.257 00:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:16.257 00:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:23:16.257 00:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:16.257 00:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:16.257 00:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:16.257 00:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.257 00:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:16.257 00:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.257 00:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:16.257 [2024-07-16 00:59:50.719711] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.257 [2024-07-16 00:59:50.727896] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:16.257 null0 00:23:16.257 [2024-07-16 00:59:50.759815] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.257 00:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.257 00:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2722637 00:23:16.257 00:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2722637 /tmp/host.sock 00:23:16.257 00:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2722637 ']' 00:23:16.257 00:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:16.257 00:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:16.257 00:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:23:16.257 00:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:16.257 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:16.257 00:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:16.257 00:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:16.257 [2024-07-16 00:59:50.827887] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:23:16.257 [2024-07-16 00:59:50.827962] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2722637 ] 00:23:16.257 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.257 [2024-07-16 00:59:50.885203] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.257 [2024-07-16 00:59:50.999287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.515 00:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:16.515 00:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:23:16.515 00:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:16.515 00:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:16.515 00:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.515 00:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:16.515 00:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.515 00:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:16.515 00:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.515 00:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:16.515 00:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.515 00:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:16.515 00:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.515 00:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:17.454 [2024-07-16 00:59:52.198835] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:17.454 [2024-07-16 00:59:52.198886] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:17.454 [2024-07-16 00:59:52.198910] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:17.728 [2024-07-16 00:59:52.325341] 
bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:17.728 [2024-07-16 00:59:52.429161] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:17.728 [2024-07-16 00:59:52.429239] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:17.728 [2024-07-16 00:59:52.429279] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:17.728 [2024-07-16 00:59:52.429301] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:17.728 [2024-07-16 00:59:52.429332] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:17.728 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.728 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:17.728 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:17.728 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.728 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:17.728 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.728 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:17.728 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:17.728 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:17.728 [2024-07-16 00:59:52.436378] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x16e29d0 was disconnected and freed. delete nvme_qpair. 
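On the host side, the trace shows a second nvmf_tgt instance acting as the initiator and driving the bdev_nvme discovery service over JSON-RPC against the target's discovery port (8009). A rough equivalent of the commands being traced, with the flags taken directly from this log (rpc_cmd in the harness is approximately an invocation of scripts/rpc.py with the same arguments):

  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  ./scripts/rpc.py -s /tmp/host.sock framework_start_init
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

--wait-for-attach makes the RPC return only once the discovered subsystem (nvme0, backing nvme0n1) has been attached, which is why the bdev shows up immediately in the bdev list checks that follow.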
00:23:17.728 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.728 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:17.728 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:17.728 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:17.987 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:17.987 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:17.987 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.987 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:17.987 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.987 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:17.987 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:17.987 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:17.987 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.987 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:17.987 00:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:18.924 00:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:18.924 00:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:18.924 00:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:18.924 00:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.924 00:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:18.924 00:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:18.924 00:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:18.924 00:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.924 00:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:18.924 00:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:19.860 00:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:19.860 00:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.860 00:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:19.860 00:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.860 00:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:19.860 00:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:23:19.860 00:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:20.118 00:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.118 00:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:20.118 00:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:21.054 00:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:21.054 00:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.054 00:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:21.054 00:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.054 00:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:21.054 00:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:21.054 00:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:21.055 00:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.055 00:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:21.055 00:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:21.989 00:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:21.989 00:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.989 00:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:21.989 00:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.989 00:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:21.989 00:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:21.989 00:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:21.989 00:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.246 00:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:22.246 00:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:23.181 00:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:23.181 00:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.181 00:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:23.181 00:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.181 00:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:23.181 00:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:23.181 00:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:23.181 00:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
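The repeating "bdev_get_bdevs | jq -r '.[].name' | sort | xargs" blocks are the get_bdev_list / wait_for_bdev helpers polling once a second until the bdev list reaches an expected value; at this point in the run they are waiting for the list to empty after cvl_0_0 was taken down, and later the same loop waits for nvme1n1 to appear after re-discovery. A minimal re-implementation of the idea, omitting whatever retry limit the real helper in discovery_remove_ifc.sh applies:

  get_bdev_list() {
      # Names of all bdevs known to the host app, as one sorted space-separated line.
      ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      local expected=$1              # e.g. "nvme0n1", "nvme1n1", or "" for "gone"
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }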
00:23:23.181 00:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:23.181 00:59:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:23.181 [2024-07-16 00:59:57.870519] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:23.181 [2024-07-16 00:59:57.870622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.181 [2024-07-16 00:59:57.870645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.181 [2024-07-16 00:59:57.870664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.181 [2024-07-16 00:59:57.870686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.181 [2024-07-16 00:59:57.870701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.181 [2024-07-16 00:59:57.870713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.181 [2024-07-16 00:59:57.870726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.181 [2024-07-16 00:59:57.870738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.181 [2024-07-16 00:59:57.870751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.181 [2024-07-16 00:59:57.870763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.181 [2024-07-16 00:59:57.870775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a94e0 is same with the state(5) to be set 00:23:23.181 [2024-07-16 00:59:57.880536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a94e0 (9): Bad file descriptor 00:23:23.181 [2024-07-16 00:59:57.890581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:24.118 00:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:24.118 00:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.118 00:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.118 00:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:24.118 00:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:24.118 00:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:24.118 00:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:24.377 [2024-07-16 00:59:58.930926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:24.377 [2024-07-16 
00:59:58.931006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a94e0 with addr=10.0.0.2, port=4420 00:23:24.377 [2024-07-16 00:59:58.931040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a94e0 is same with the state(5) to be set 00:23:24.377 [2024-07-16 00:59:58.931101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a94e0 (9): Bad file descriptor 00:23:24.377 [2024-07-16 00:59:58.931611] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:24.377 [2024-07-16 00:59:58.931647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:24.377 [2024-07-16 00:59:58.931665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:24.377 [2024-07-16 00:59:58.931685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:24.377 [2024-07-16 00:59:58.931718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.377 [2024-07-16 00:59:58.931738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:24.377 00:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.377 00:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:24.377 00:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:25.312 [2024-07-16 00:59:59.934263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.312 [2024-07-16 00:59:59.934341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.312 [2024-07-16 00:59:59.934364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.312 [2024-07-16 00:59:59.934381] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:23:25.312 [2024-07-16 00:59:59.934411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
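The errno 110 / "Bad file descriptor" / "resetting controller" sequence is bdev_nvme retrying the TCP connection while the target's interface is down; with --ctrlr-loss-timeout-sec 2 and --reconnect-delay-sec 1 it gives up after roughly two attempts and deletes nvme0n1, which is what the polling loop above is waiting for. Not part of the script, but one way to watch the same state transitions from outside is to dump the controller while this happens (bdev_nvme_get_controllers is a standard SPDK RPC; the -n filter is optional):

  while true; do
      ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme
      sleep 1
  done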
00:23:25.312 [2024-07-16 00:59:59.934452] bdev_nvme.c:6739:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:25.312 [2024-07-16 00:59:59.934527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.312 [2024-07-16 00:59:59.934549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.312 [2024-07-16 00:59:59.934569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.312 [2024-07-16 00:59:59.934582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.312 [2024-07-16 00:59:59.934595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.313 [2024-07-16 00:59:59.934607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.313 [2024-07-16 00:59:59.934621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.313 [2024-07-16 00:59:59.934633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.313 [2024-07-16 00:59:59.934646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.313 [2024-07-16 00:59:59.934659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.313 [2024-07-16 00:59:59.934672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:23:25.313 [2024-07-16 00:59:59.934742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a8960 (9): Bad file descriptor 00:23:25.313 [2024-07-16 00:59:59.935745] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:25.313 [2024-07-16 00:59:59.935766] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:25.313 00:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:25.313 00:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.313 00:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:25.313 00:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.313 00:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:25.313 00:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:25.313 00:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:25.313 00:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.313 00:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:25.313 00:59:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.313 01:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.313 01:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:25.313 01:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:25.313 01:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.313 01:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:25.313 01:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.313 01:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:25.313 01:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:25.313 01:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:25.313 01:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.572 01:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:25.572 01:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:26.505 01:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:26.505 01:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.505 01:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.505 01:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:26.505 01:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:26.505 01:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:26.505 01:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:26.505 01:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.505 01:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:26.505 01:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:27.447 [2024-07-16 01:00:01.995102] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:27.447 [2024-07-16 01:00:01.995150] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:27.447 [2024-07-16 01:00:01.995187] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:27.447 [2024-07-16 01:00:02.081469] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:27.447 01:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:27.447 01:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.447 01:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:27.447 01:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.447 01:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.447 01:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:27.447 01:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:27.447 01:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.447 [2024-07-16 01:00:02.144395] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:27.447 [2024-07-16 01:00:02.144444] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:27.447 [2024-07-16 01:00:02.144478] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:27.447 [2024-07-16 01:00:02.144499] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:27.447 [2024-07-16 01:00:02.144511] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:27.447 [2024-07-16 01:00:02.151960] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x16eb180 was disconnected and freed. delete nvme_qpair. 
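Re-discovery has succeeded at this point (nvme1 attached, nvme1n1 back in the bdev list), so what follows is teardown. Condensed from the killprocess / nvmftestfini trace below, assuming the pids recorded earlier in hostpid and nvmfpid (2722637 and 2722484 in this run); the namespace and address cleanup is roughly what remove_spdk_ns and nvmf_tcp_fini amount to here:

  kill "$hostpid"                     # host app on /tmp/host.sock
  modprobe -v -r nvme-tcp             # unloads nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                     # target app inside cvl_0_0_ns_spdk
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1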
00:23:27.447 01:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:27.447 01:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:28.838 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:28.838 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.838 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.838 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:28.838 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:28.838 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:28.838 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:28.838 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.838 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:28.838 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:28.838 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2722637 00:23:28.838 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2722637 ']' 00:23:28.838 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2722637 00:23:28.838 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:28.838 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2722637 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2722637' 00:23:28.839 killing process with pid 2722637 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2722637 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2722637 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:28.839 rmmod nvme_tcp 00:23:28.839 rmmod nvme_fabrics 00:23:28.839 rmmod nvme_keyring 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2722484 ']' 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2722484 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2722484 ']' 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2722484 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:28.839 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2722484 00:23:29.098 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:29.098 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:29.098 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2722484' 00:23:29.098 killing process with pid 2722484 00:23:29.098 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2722484 00:23:29.098 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2722484 00:23:29.357 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:29.357 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:29.357 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:29.357 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:29.357 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:29.357 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.357 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:29.357 01:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.305 01:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:31.305 00:23:31.305 real 0m18.323s 00:23:31.305 user 0m26.665s 00:23:31.305 sys 0m2.938s 00:23:31.305 01:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:31.305 01:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:31.305 ************************************ 00:23:31.305 END TEST nvmf_discovery_remove_ifc 00:23:31.305 ************************************ 00:23:31.305 01:00:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:31.305 01:00:05 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:31.305 01:00:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:31.305 01:00:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:23:31.306 01:00:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:31.306 ************************************ 00:23:31.306 START TEST nvmf_identify_kernel_target 00:23:31.306 ************************************ 00:23:31.306 01:00:05 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:31.306 * Looking for test storage... 00:23:31.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:31.306 01:00:06 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:23:31.306 01:00:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:33.212 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:33.212 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:33.212 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:33.212 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.212 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.472 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:33.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:23:33.472 00:23:33.472 --- 10.0.0.2 ping statistics --- 00:23:33.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.472 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:23:33.472 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:33.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:23:33.472 00:23:33.472 --- 10.0.0.1 ping statistics --- 00:23:33.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.472 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:23:33.472 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.472 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:23:33.472 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:33.472 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.472 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:33.472 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:33.472 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.472 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:33.472 01:00:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:33.472 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:33.472 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:33.472 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:33.472 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.472 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.472 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.472 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.472 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.472 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.472 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.472 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.472 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.472 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:33.472 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:33.472 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:33.473 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:33.473 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:33.473 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:33.473 01:00:08 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:33.473 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:33.473 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:33.473 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:33.473 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:33.473 01:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:34.418 Waiting for block devices as requested 00:23:34.418 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:34.679 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:34.679 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:34.941 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:34.941 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:34.941 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:34.941 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:35.200 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:35.200 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:35.200 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:35.200 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:35.467 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:35.467 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:35.468 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:35.468 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:35.729 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:35.729 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:35.992 No valid GPT data, bailing 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:35.992 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:35.993 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:35.993 00:23:35.993 Discovery Log Number of Records 2, Generation counter 2 00:23:35.993 =====Discovery Log Entry 0====== 00:23:35.993 trtype: tcp 00:23:35.993 adrfam: ipv4 00:23:35.993 subtype: current discovery subsystem 00:23:35.993 treq: not specified, sq flow control disable supported 00:23:35.993 portid: 1 00:23:35.993 trsvcid: 4420 00:23:35.993 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:35.993 traddr: 10.0.0.1 00:23:35.993 eflags: none 00:23:35.993 sectype: none 00:23:35.993 =====Discovery Log Entry 1====== 00:23:35.993 trtype: tcp 00:23:35.993 adrfam: ipv4 00:23:35.993 subtype: nvme subsystem 00:23:35.993 treq: not specified, sq flow control disable supported 00:23:35.993 portid: 1 00:23:35.993 trsvcid: 4420 00:23:35.993 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:35.993 traddr: 10.0.0.1 00:23:35.993 eflags: none 00:23:35.993 sectype: none 00:23:35.993 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:35.993 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:35.993 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.259 ===================================================== 00:23:36.259 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:36.259 ===================================================== 00:23:36.259 Controller Capabilities/Features 00:23:36.259 ================================ 00:23:36.259 Vendor ID: 0000 00:23:36.259 Subsystem Vendor ID: 0000 00:23:36.259 Serial Number: a855e8ce74a546009bee 00:23:36.259 Model Number: Linux 00:23:36.259 Firmware Version: 6.7.0-68 00:23:36.259 Recommended Arb Burst: 0 00:23:36.259 IEEE OUI Identifier: 00 00 00 00:23:36.259 Multi-path I/O 00:23:36.259 May have multiple subsystem ports: No 00:23:36.259 May have multiple 
controllers: No 00:23:36.259 Associated with SR-IOV VF: No 00:23:36.259 Max Data Transfer Size: Unlimited 00:23:36.259 Max Number of Namespaces: 0 00:23:36.259 Max Number of I/O Queues: 1024 00:23:36.259 NVMe Specification Version (VS): 1.3 00:23:36.259 NVMe Specification Version (Identify): 1.3 00:23:36.259 Maximum Queue Entries: 1024 00:23:36.259 Contiguous Queues Required: No 00:23:36.259 Arbitration Mechanisms Supported 00:23:36.259 Weighted Round Robin: Not Supported 00:23:36.259 Vendor Specific: Not Supported 00:23:36.259 Reset Timeout: 7500 ms 00:23:36.259 Doorbell Stride: 4 bytes 00:23:36.259 NVM Subsystem Reset: Not Supported 00:23:36.259 Command Sets Supported 00:23:36.259 NVM Command Set: Supported 00:23:36.259 Boot Partition: Not Supported 00:23:36.259 Memory Page Size Minimum: 4096 bytes 00:23:36.259 Memory Page Size Maximum: 4096 bytes 00:23:36.259 Persistent Memory Region: Not Supported 00:23:36.259 Optional Asynchronous Events Supported 00:23:36.259 Namespace Attribute Notices: Not Supported 00:23:36.259 Firmware Activation Notices: Not Supported 00:23:36.259 ANA Change Notices: Not Supported 00:23:36.259 PLE Aggregate Log Change Notices: Not Supported 00:23:36.259 LBA Status Info Alert Notices: Not Supported 00:23:36.259 EGE Aggregate Log Change Notices: Not Supported 00:23:36.259 Normal NVM Subsystem Shutdown event: Not Supported 00:23:36.259 Zone Descriptor Change Notices: Not Supported 00:23:36.259 Discovery Log Change Notices: Supported 00:23:36.259 Controller Attributes 00:23:36.259 128-bit Host Identifier: Not Supported 00:23:36.259 Non-Operational Permissive Mode: Not Supported 00:23:36.259 NVM Sets: Not Supported 00:23:36.259 Read Recovery Levels: Not Supported 00:23:36.259 Endurance Groups: Not Supported 00:23:36.259 Predictable Latency Mode: Not Supported 00:23:36.259 Traffic Based Keep ALive: Not Supported 00:23:36.259 Namespace Granularity: Not Supported 00:23:36.259 SQ Associations: Not Supported 00:23:36.259 UUID List: Not Supported 00:23:36.259 Multi-Domain Subsystem: Not Supported 00:23:36.259 Fixed Capacity Management: Not Supported 00:23:36.259 Variable Capacity Management: Not Supported 00:23:36.259 Delete Endurance Group: Not Supported 00:23:36.259 Delete NVM Set: Not Supported 00:23:36.259 Extended LBA Formats Supported: Not Supported 00:23:36.259 Flexible Data Placement Supported: Not Supported 00:23:36.259 00:23:36.259 Controller Memory Buffer Support 00:23:36.259 ================================ 00:23:36.259 Supported: No 00:23:36.259 00:23:36.259 Persistent Memory Region Support 00:23:36.259 ================================ 00:23:36.259 Supported: No 00:23:36.259 00:23:36.259 Admin Command Set Attributes 00:23:36.259 ============================ 00:23:36.259 Security Send/Receive: Not Supported 00:23:36.259 Format NVM: Not Supported 00:23:36.259 Firmware Activate/Download: Not Supported 00:23:36.259 Namespace Management: Not Supported 00:23:36.259 Device Self-Test: Not Supported 00:23:36.259 Directives: Not Supported 00:23:36.259 NVMe-MI: Not Supported 00:23:36.259 Virtualization Management: Not Supported 00:23:36.259 Doorbell Buffer Config: Not Supported 00:23:36.259 Get LBA Status Capability: Not Supported 00:23:36.259 Command & Feature Lockdown Capability: Not Supported 00:23:36.259 Abort Command Limit: 1 00:23:36.259 Async Event Request Limit: 1 00:23:36.259 Number of Firmware Slots: N/A 00:23:36.259 Firmware Slot 1 Read-Only: N/A 00:23:36.259 Firmware Activation Without Reset: N/A 00:23:36.259 Multiple Update Detection Support: N/A 
00:23:36.259 Firmware Update Granularity: No Information Provided 00:23:36.259 Per-Namespace SMART Log: No 00:23:36.259 Asymmetric Namespace Access Log Page: Not Supported 00:23:36.259 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:36.259 Command Effects Log Page: Not Supported 00:23:36.259 Get Log Page Extended Data: Supported 00:23:36.259 Telemetry Log Pages: Not Supported 00:23:36.259 Persistent Event Log Pages: Not Supported 00:23:36.259 Supported Log Pages Log Page: May Support 00:23:36.259 Commands Supported & Effects Log Page: Not Supported 00:23:36.259 Feature Identifiers & Effects Log Page:May Support 00:23:36.259 NVMe-MI Commands & Effects Log Page: May Support 00:23:36.259 Data Area 4 for Telemetry Log: Not Supported 00:23:36.259 Error Log Page Entries Supported: 1 00:23:36.259 Keep Alive: Not Supported 00:23:36.259 00:23:36.259 NVM Command Set Attributes 00:23:36.259 ========================== 00:23:36.259 Submission Queue Entry Size 00:23:36.259 Max: 1 00:23:36.259 Min: 1 00:23:36.259 Completion Queue Entry Size 00:23:36.259 Max: 1 00:23:36.259 Min: 1 00:23:36.259 Number of Namespaces: 0 00:23:36.259 Compare Command: Not Supported 00:23:36.259 Write Uncorrectable Command: Not Supported 00:23:36.259 Dataset Management Command: Not Supported 00:23:36.259 Write Zeroes Command: Not Supported 00:23:36.259 Set Features Save Field: Not Supported 00:23:36.259 Reservations: Not Supported 00:23:36.259 Timestamp: Not Supported 00:23:36.259 Copy: Not Supported 00:23:36.259 Volatile Write Cache: Not Present 00:23:36.259 Atomic Write Unit (Normal): 1 00:23:36.259 Atomic Write Unit (PFail): 1 00:23:36.259 Atomic Compare & Write Unit: 1 00:23:36.259 Fused Compare & Write: Not Supported 00:23:36.259 Scatter-Gather List 00:23:36.259 SGL Command Set: Supported 00:23:36.259 SGL Keyed: Not Supported 00:23:36.259 SGL Bit Bucket Descriptor: Not Supported 00:23:36.259 SGL Metadata Pointer: Not Supported 00:23:36.259 Oversized SGL: Not Supported 00:23:36.259 SGL Metadata Address: Not Supported 00:23:36.259 SGL Offset: Supported 00:23:36.259 Transport SGL Data Block: Not Supported 00:23:36.259 Replay Protected Memory Block: Not Supported 00:23:36.259 00:23:36.259 Firmware Slot Information 00:23:36.259 ========================= 00:23:36.259 Active slot: 0 00:23:36.259 00:23:36.259 00:23:36.259 Error Log 00:23:36.259 ========= 00:23:36.259 00:23:36.259 Active Namespaces 00:23:36.259 ================= 00:23:36.259 Discovery Log Page 00:23:36.259 ================== 00:23:36.259 Generation Counter: 2 00:23:36.259 Number of Records: 2 00:23:36.259 Record Format: 0 00:23:36.259 00:23:36.259 Discovery Log Entry 0 00:23:36.259 ---------------------- 00:23:36.259 Transport Type: 3 (TCP) 00:23:36.259 Address Family: 1 (IPv4) 00:23:36.259 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:36.259 Entry Flags: 00:23:36.259 Duplicate Returned Information: 0 00:23:36.259 Explicit Persistent Connection Support for Discovery: 0 00:23:36.259 Transport Requirements: 00:23:36.259 Secure Channel: Not Specified 00:23:36.259 Port ID: 1 (0x0001) 00:23:36.259 Controller ID: 65535 (0xffff) 00:23:36.259 Admin Max SQ Size: 32 00:23:36.259 Transport Service Identifier: 4420 00:23:36.259 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:36.259 Transport Address: 10.0.0.1 00:23:36.259 Discovery Log Entry 1 00:23:36.259 ---------------------- 00:23:36.259 Transport Type: 3 (TCP) 00:23:36.259 Address Family: 1 (IPv4) 00:23:36.259 Subsystem Type: 2 (NVM Subsystem) 00:23:36.259 Entry Flags: 
00:23:36.259 Duplicate Returned Information: 0 00:23:36.259 Explicit Persistent Connection Support for Discovery: 0 00:23:36.259 Transport Requirements: 00:23:36.259 Secure Channel: Not Specified 00:23:36.259 Port ID: 1 (0x0001) 00:23:36.259 Controller ID: 65535 (0xffff) 00:23:36.259 Admin Max SQ Size: 32 00:23:36.259 Transport Service Identifier: 4420 00:23:36.259 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:36.259 Transport Address: 10.0.0.1 00:23:36.259 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:36.259 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.259 get_feature(0x01) failed 00:23:36.259 get_feature(0x02) failed 00:23:36.259 get_feature(0x04) failed 00:23:36.259 ===================================================== 00:23:36.259 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:36.259 ===================================================== 00:23:36.259 Controller Capabilities/Features 00:23:36.259 ================================ 00:23:36.259 Vendor ID: 0000 00:23:36.259 Subsystem Vendor ID: 0000 00:23:36.259 Serial Number: def19c6174d09002be0d 00:23:36.259 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:36.259 Firmware Version: 6.7.0-68 00:23:36.259 Recommended Arb Burst: 6 00:23:36.259 IEEE OUI Identifier: 00 00 00 00:23:36.259 Multi-path I/O 00:23:36.259 May have multiple subsystem ports: Yes 00:23:36.259 May have multiple controllers: Yes 00:23:36.259 Associated with SR-IOV VF: No 00:23:36.259 Max Data Transfer Size: Unlimited 00:23:36.259 Max Number of Namespaces: 1024 00:23:36.259 Max Number of I/O Queues: 128 00:23:36.259 NVMe Specification Version (VS): 1.3 00:23:36.259 NVMe Specification Version (Identify): 1.3 00:23:36.259 Maximum Queue Entries: 1024 00:23:36.259 Contiguous Queues Required: No 00:23:36.259 Arbitration Mechanisms Supported 00:23:36.259 Weighted Round Robin: Not Supported 00:23:36.259 Vendor Specific: Not Supported 00:23:36.259 Reset Timeout: 7500 ms 00:23:36.259 Doorbell Stride: 4 bytes 00:23:36.259 NVM Subsystem Reset: Not Supported 00:23:36.259 Command Sets Supported 00:23:36.259 NVM Command Set: Supported 00:23:36.259 Boot Partition: Not Supported 00:23:36.259 Memory Page Size Minimum: 4096 bytes 00:23:36.259 Memory Page Size Maximum: 4096 bytes 00:23:36.259 Persistent Memory Region: Not Supported 00:23:36.259 Optional Asynchronous Events Supported 00:23:36.259 Namespace Attribute Notices: Supported 00:23:36.259 Firmware Activation Notices: Not Supported 00:23:36.259 ANA Change Notices: Supported 00:23:36.259 PLE Aggregate Log Change Notices: Not Supported 00:23:36.259 LBA Status Info Alert Notices: Not Supported 00:23:36.259 EGE Aggregate Log Change Notices: Not Supported 00:23:36.259 Normal NVM Subsystem Shutdown event: Not Supported 00:23:36.259 Zone Descriptor Change Notices: Not Supported 00:23:36.259 Discovery Log Change Notices: Not Supported 00:23:36.259 Controller Attributes 00:23:36.259 128-bit Host Identifier: Supported 00:23:36.260 Non-Operational Permissive Mode: Not Supported 00:23:36.260 NVM Sets: Not Supported 00:23:36.260 Read Recovery Levels: Not Supported 00:23:36.260 Endurance Groups: Not Supported 00:23:36.260 Predictable Latency Mode: Not Supported 00:23:36.260 Traffic Based Keep ALive: Supported 00:23:36.260 Namespace Granularity: Not Supported 
00:23:36.260 SQ Associations: Not Supported 00:23:36.260 UUID List: Not Supported 00:23:36.260 Multi-Domain Subsystem: Not Supported 00:23:36.260 Fixed Capacity Management: Not Supported 00:23:36.260 Variable Capacity Management: Not Supported 00:23:36.260 Delete Endurance Group: Not Supported 00:23:36.260 Delete NVM Set: Not Supported 00:23:36.260 Extended LBA Formats Supported: Not Supported 00:23:36.260 Flexible Data Placement Supported: Not Supported 00:23:36.260 00:23:36.260 Controller Memory Buffer Support 00:23:36.260 ================================ 00:23:36.260 Supported: No 00:23:36.260 00:23:36.260 Persistent Memory Region Support 00:23:36.260 ================================ 00:23:36.260 Supported: No 00:23:36.260 00:23:36.260 Admin Command Set Attributes 00:23:36.260 ============================ 00:23:36.260 Security Send/Receive: Not Supported 00:23:36.260 Format NVM: Not Supported 00:23:36.260 Firmware Activate/Download: Not Supported 00:23:36.260 Namespace Management: Not Supported 00:23:36.260 Device Self-Test: Not Supported 00:23:36.260 Directives: Not Supported 00:23:36.260 NVMe-MI: Not Supported 00:23:36.260 Virtualization Management: Not Supported 00:23:36.260 Doorbell Buffer Config: Not Supported 00:23:36.260 Get LBA Status Capability: Not Supported 00:23:36.260 Command & Feature Lockdown Capability: Not Supported 00:23:36.260 Abort Command Limit: 4 00:23:36.260 Async Event Request Limit: 4 00:23:36.260 Number of Firmware Slots: N/A 00:23:36.260 Firmware Slot 1 Read-Only: N/A 00:23:36.260 Firmware Activation Without Reset: N/A 00:23:36.260 Multiple Update Detection Support: N/A 00:23:36.260 Firmware Update Granularity: No Information Provided 00:23:36.260 Per-Namespace SMART Log: Yes 00:23:36.260 Asymmetric Namespace Access Log Page: Supported 00:23:36.260 ANA Transition Time : 10 sec 00:23:36.260 00:23:36.260 Asymmetric Namespace Access Capabilities 00:23:36.260 ANA Optimized State : Supported 00:23:36.260 ANA Non-Optimized State : Supported 00:23:36.260 ANA Inaccessible State : Supported 00:23:36.260 ANA Persistent Loss State : Supported 00:23:36.260 ANA Change State : Supported 00:23:36.260 ANAGRPID is not changed : No 00:23:36.260 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:36.260 00:23:36.260 ANA Group Identifier Maximum : 128 00:23:36.260 Number of ANA Group Identifiers : 128 00:23:36.260 Max Number of Allowed Namespaces : 1024 00:23:36.260 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:36.260 Command Effects Log Page: Supported 00:23:36.260 Get Log Page Extended Data: Supported 00:23:36.260 Telemetry Log Pages: Not Supported 00:23:36.260 Persistent Event Log Pages: Not Supported 00:23:36.260 Supported Log Pages Log Page: May Support 00:23:36.260 Commands Supported & Effects Log Page: Not Supported 00:23:36.260 Feature Identifiers & Effects Log Page:May Support 00:23:36.260 NVMe-MI Commands & Effects Log Page: May Support 00:23:36.260 Data Area 4 for Telemetry Log: Not Supported 00:23:36.260 Error Log Page Entries Supported: 128 00:23:36.260 Keep Alive: Supported 00:23:36.260 Keep Alive Granularity: 1000 ms 00:23:36.260 00:23:36.260 NVM Command Set Attributes 00:23:36.260 ========================== 00:23:36.260 Submission Queue Entry Size 00:23:36.260 Max: 64 00:23:36.260 Min: 64 00:23:36.260 Completion Queue Entry Size 00:23:36.260 Max: 16 00:23:36.260 Min: 16 00:23:36.260 Number of Namespaces: 1024 00:23:36.260 Compare Command: Not Supported 00:23:36.260 Write Uncorrectable Command: Not Supported 00:23:36.260 Dataset Management Command: Supported 
00:23:36.260 Write Zeroes Command: Supported 00:23:36.260 Set Features Save Field: Not Supported 00:23:36.260 Reservations: Not Supported 00:23:36.260 Timestamp: Not Supported 00:23:36.260 Copy: Not Supported 00:23:36.260 Volatile Write Cache: Present 00:23:36.260 Atomic Write Unit (Normal): 1 00:23:36.260 Atomic Write Unit (PFail): 1 00:23:36.260 Atomic Compare & Write Unit: 1 00:23:36.260 Fused Compare & Write: Not Supported 00:23:36.260 Scatter-Gather List 00:23:36.260 SGL Command Set: Supported 00:23:36.260 SGL Keyed: Not Supported 00:23:36.260 SGL Bit Bucket Descriptor: Not Supported 00:23:36.260 SGL Metadata Pointer: Not Supported 00:23:36.260 Oversized SGL: Not Supported 00:23:36.260 SGL Metadata Address: Not Supported 00:23:36.260 SGL Offset: Supported 00:23:36.260 Transport SGL Data Block: Not Supported 00:23:36.260 Replay Protected Memory Block: Not Supported 00:23:36.260 00:23:36.260 Firmware Slot Information 00:23:36.260 ========================= 00:23:36.260 Active slot: 0 00:23:36.260 00:23:36.260 Asymmetric Namespace Access 00:23:36.260 =========================== 00:23:36.260 Change Count : 0 00:23:36.260 Number of ANA Group Descriptors : 1 00:23:36.260 ANA Group Descriptor : 0 00:23:36.260 ANA Group ID : 1 00:23:36.260 Number of NSID Values : 1 00:23:36.260 Change Count : 0 00:23:36.260 ANA State : 1 00:23:36.260 Namespace Identifier : 1 00:23:36.260 00:23:36.260 Commands Supported and Effects 00:23:36.260 ============================== 00:23:36.260 Admin Commands 00:23:36.260 -------------- 00:23:36.260 Get Log Page (02h): Supported 00:23:36.260 Identify (06h): Supported 00:23:36.260 Abort (08h): Supported 00:23:36.260 Set Features (09h): Supported 00:23:36.260 Get Features (0Ah): Supported 00:23:36.260 Asynchronous Event Request (0Ch): Supported 00:23:36.260 Keep Alive (18h): Supported 00:23:36.260 I/O Commands 00:23:36.260 ------------ 00:23:36.260 Flush (00h): Supported 00:23:36.260 Write (01h): Supported LBA-Change 00:23:36.260 Read (02h): Supported 00:23:36.260 Write Zeroes (08h): Supported LBA-Change 00:23:36.260 Dataset Management (09h): Supported 00:23:36.260 00:23:36.260 Error Log 00:23:36.260 ========= 00:23:36.260 Entry: 0 00:23:36.260 Error Count: 0x3 00:23:36.260 Submission Queue Id: 0x0 00:23:36.260 Command Id: 0x5 00:23:36.260 Phase Bit: 0 00:23:36.260 Status Code: 0x2 00:23:36.260 Status Code Type: 0x0 00:23:36.260 Do Not Retry: 1 00:23:36.260 Error Location: 0x28 00:23:36.260 LBA: 0x0 00:23:36.260 Namespace: 0x0 00:23:36.260 Vendor Log Page: 0x0 00:23:36.260 ----------- 00:23:36.260 Entry: 1 00:23:36.260 Error Count: 0x2 00:23:36.260 Submission Queue Id: 0x0 00:23:36.260 Command Id: 0x5 00:23:36.260 Phase Bit: 0 00:23:36.260 Status Code: 0x2 00:23:36.260 Status Code Type: 0x0 00:23:36.260 Do Not Retry: 1 00:23:36.260 Error Location: 0x28 00:23:36.260 LBA: 0x0 00:23:36.260 Namespace: 0x0 00:23:36.260 Vendor Log Page: 0x0 00:23:36.260 ----------- 00:23:36.260 Entry: 2 00:23:36.260 Error Count: 0x1 00:23:36.260 Submission Queue Id: 0x0 00:23:36.260 Command Id: 0x4 00:23:36.260 Phase Bit: 0 00:23:36.260 Status Code: 0x2 00:23:36.260 Status Code Type: 0x0 00:23:36.260 Do Not Retry: 1 00:23:36.260 Error Location: 0x28 00:23:36.260 LBA: 0x0 00:23:36.260 Namespace: 0x0 00:23:36.260 Vendor Log Page: 0x0 00:23:36.260 00:23:36.260 Number of Queues 00:23:36.260 ================ 00:23:36.260 Number of I/O Submission Queues: 128 00:23:36.260 Number of I/O Completion Queues: 128 00:23:36.260 00:23:36.260 ZNS Specific Controller Data 00:23:36.260 
============================ 00:23:36.260 Zone Append Size Limit: 0 00:23:36.260 00:23:36.260 00:23:36.260 Active Namespaces 00:23:36.260 ================= 00:23:36.260 get_feature(0x05) failed 00:23:36.260 Namespace ID:1 00:23:36.260 Command Set Identifier: NVM (00h) 00:23:36.260 Deallocate: Supported 00:23:36.260 Deallocated/Unwritten Error: Not Supported 00:23:36.260 Deallocated Read Value: Unknown 00:23:36.260 Deallocate in Write Zeroes: Not Supported 00:23:36.260 Deallocated Guard Field: 0xFFFF 00:23:36.260 Flush: Supported 00:23:36.260 Reservation: Not Supported 00:23:36.260 Namespace Sharing Capabilities: Multiple Controllers 00:23:36.260 Size (in LBAs): 1953525168 (931GiB) 00:23:36.260 Capacity (in LBAs): 1953525168 (931GiB) 00:23:36.260 Utilization (in LBAs): 1953525168 (931GiB) 00:23:36.260 UUID: 08585601-3a40-4799-b4e2-500ac9aefcb4 00:23:36.260 Thin Provisioning: Not Supported 00:23:36.260 Per-NS Atomic Units: Yes 00:23:36.260 Atomic Boundary Size (Normal): 0 00:23:36.260 Atomic Boundary Size (PFail): 0 00:23:36.260 Atomic Boundary Offset: 0 00:23:36.260 NGUID/EUI64 Never Reused: No 00:23:36.260 ANA group ID: 1 00:23:36.260 Namespace Write Protected: No 00:23:36.260 Number of LBA Formats: 1 00:23:36.260 Current LBA Format: LBA Format #00 00:23:36.260 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:36.260 00:23:36.260 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:36.260 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:36.260 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:36.260 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:36.260 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:36.260 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:36.260 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:36.260 rmmod nvme_tcp 00:23:36.260 rmmod nvme_fabrics 00:23:36.260 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.260 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:36.260 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:36.260 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:36.260 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:36.260 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:36.260 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:36.260 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.260 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.260 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.260 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.260 01:00:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.197 01:00:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:38.197 
01:00:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:38.197 01:00:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:38.197 01:00:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:38.459 01:00:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:38.460 01:00:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:38.460 01:00:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:38.460 01:00:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:38.460 01:00:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:38.460 01:00:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:38.460 01:00:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:39.409 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:39.409 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:39.409 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:39.409 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:39.409 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:39.409 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:39.674 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:39.674 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:39.674 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:39.674 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:39.674 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:39.674 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:39.674 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:39.674 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:39.674 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:39.674 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:40.618 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:40.618 00:23:40.618 real 0m9.310s 00:23:40.618 user 0m1.914s 00:23:40.618 sys 0m3.294s 00:23:40.618 01:00:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:40.618 01:00:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.618 ************************************ 00:23:40.618 END TEST nvmf_identify_kernel_target 00:23:40.618 ************************************ 00:23:40.618 01:00:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:40.618 01:00:15 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:40.618 01:00:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:40.618 01:00:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:40.618 01:00:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:40.618 ************************************ 00:23:40.618 START TEST nvmf_auth_host 00:23:40.618 ************************************ 00:23:40.618 01:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:40.618 * Looking for test storage... 00:23:40.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:40.876 01:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:40.876 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:40.876 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:40.876 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:40.876 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:40.876 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:40.876 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:40.877 01:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.774 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.775 
01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:42.775 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:42.775 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:42.775 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:42.775 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:42.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:42.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:23:42.775 00:23:42.775 --- 10.0.0.2 ping statistics --- 00:23:42.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.775 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:42.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:42.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:23:42.775 00:23:42.775 --- 10.0.0.1 ping statistics --- 00:23:42.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.775 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2730248 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2730248 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2730248 ']' 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
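The nvmf_tcp_init trace above (run once per test, here again for nvmf_auth_host) is the whole point-to-point setup these TCP tests rely on: one port of the e810 pair is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2, the other keeps 10.0.0.1 in the default namespace, and TCP port 4420 is opened between them. A condensed sketch of the equivalent commands, using only the interface names, addresses, and port reported in this log:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # private namespace for one side of the link
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # NVMF_INITIATOR_IP in the log
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # NVMF_FIRST_TARGET_IP in the log
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open NVMe/TCP port 4420 on the default-namespace side
ping -c 1 10.0.0.2                                                  # reachability check toward the namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # and back toward the default namespace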
00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:42.775 01:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.053 01:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:43.053 01:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:43.053 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:43.053 01:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:43.053 01:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ddbd3f5bc1374a09f8e91e277e7cfc47 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Foe 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ddbd3f5bc1374a09f8e91e277e7cfc47 0 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ddbd3f5bc1374a09f8e91e277e7cfc47 0 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ddbd3f5bc1374a09f8e91e277e7cfc47 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Foe 00:23:43.346 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Foe 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Foe 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:43.347 
01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=60a494dc539ef4c0e81046c8599d9e06f22dd91ef11076821ea9bd866b9d1d32 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.cve 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 60a494dc539ef4c0e81046c8599d9e06f22dd91ef11076821ea9bd866b9d1d32 3 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 60a494dc539ef4c0e81046c8599d9e06f22dd91ef11076821ea9bd866b9d1d32 3 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=60a494dc539ef4c0e81046c8599d9e06f22dd91ef11076821ea9bd866b9d1d32 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.cve 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.cve 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.cve 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e76c47c313b0297e9a0807789e11c93f786dae45883ccc2d 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.XTK 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e76c47c313b0297e9a0807789e11c93f786dae45883ccc2d 0 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e76c47c313b0297e9a0807789e11c93f786dae45883ccc2d 0 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e76c47c313b0297e9a0807789e11c93f786dae45883ccc2d 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.XTK 00:23:43.347 01:00:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.XTK 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.XTK 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6e8acf98475593a2c65714bd1cea96ef9ac7641f1110ee37 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.nBF 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6e8acf98475593a2c65714bd1cea96ef9ac7641f1110ee37 2 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6e8acf98475593a2c65714bd1cea96ef9ac7641f1110ee37 2 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6e8acf98475593a2c65714bd1cea96ef9ac7641f1110ee37 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:43.347 01:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.nBF 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.nBF 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.nBF 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b05a5204c4a6ccd7097053652e7ed8e7 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.WAa 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b05a5204c4a6ccd7097053652e7ed8e7 1 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b05a5204c4a6ccd7097053652e7ed8e7 1 
00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b05a5204c4a6ccd7097053652e7ed8e7 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.WAa 00:23:43.347 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.WAa 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.WAa 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=90d4c65746be97f84d537cfbc69f3401 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.eNz 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 90d4c65746be97f84d537cfbc69f3401 1 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 90d4c65746be97f84d537cfbc69f3401 1 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=90d4c65746be97f84d537cfbc69f3401 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.eNz 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.eNz 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.eNz 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=32fff94060bdc03d22e74121bc1b3b811901cc8f7d7553b5 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.RJL 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 32fff94060bdc03d22e74121bc1b3b811901cc8f7d7553b5 2 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 32fff94060bdc03d22e74121bc1b3b811901cc8f7d7553b5 2 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=32fff94060bdc03d22e74121bc1b3b811901cc8f7d7553b5 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.RJL 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.RJL 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.RJL 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0ba5502a718c65482df373d9f9551dee 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.OMO 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0ba5502a718c65482df373d9f9551dee 0 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0ba5502a718c65482df373d9f9551dee 0 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0ba5502a718c65482df373d9f9551dee 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.OMO 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.OMO 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.OMO 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ee7aeb4b6ba3fef5e5ba6a938267b79367ddda5c755b18c0fc56ff60c8c7ae6a 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.7mr 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ee7aeb4b6ba3fef5e5ba6a938267b79367ddda5c755b18c0fc56ff60c8c7ae6a 3 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ee7aeb4b6ba3fef5e5ba6a938267b79367ddda5c755b18c0fc56ff60c8c7ae6a 3 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ee7aeb4b6ba3fef5e5ba6a938267b79367ddda5c755b18c0fc56ff60c8c7ae6a 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.7mr 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.7mr 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.7mr 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2730248 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2730248 ']' 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
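gen_dhchap_key, as traced above, draws random hex from /dev/urandom, wraps it in the DHHC-1 secret representation, and stores it in a 0600 temp file. A condensed sketch of the sha384 case follows; the inline python stands in for the format_dhchap_key helper, and the CRC-32 trailer (little-endian, appended before base64-encoding) is an assumption based on the NVMe DH-HMAC-CHAP secret format rather than something shown in the trace:

# 48 hex characters become the secret; digest id 02 corresponds to sha384 in the trace.
key=$(xxd -p -c0 -l 24 /dev/urandom)
file=$(mktemp -t spdk.key-sha384.XXX)
python3 - "$key" <<'PY' > "$file"
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")   # assumed CRC-32 trailer
print("DHHC-1:02:" + base64.b64encode(secret + crc).decode() + ":")
PY
chmod 0600 "$file"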
00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:43.606 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Foe 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.cve ]] 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.cve 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.XTK 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.nBF ]] 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nBF 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.WAa 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.864 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.eNz ]] 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eNz 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.RJL 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.OMO ]] 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.OMO 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.7mr 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
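The rpc_cmd calls in the surrounding trace load each generated secret into the SPDK application's keyring under the names keyN/ckeyN, which the later attach steps reference. A minimal equivalent with rpc.py, using the key files named above (rpc.py talks to the default /var/tmp/spdk.sock):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sudo $RPC keyring_file_add_key key0  /tmp/spdk.key-null.Foe
sudo $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.cve
sudo $RPC keyring_file_add_key key1  /tmp/spdk.key-null.XTK
sudo $RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nBF
# ...and likewise for key2/ckey2, key3/ckey3 and key4, as listed in the trace.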
00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:44.122 01:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:45.083 Waiting for block devices as requested 00:23:45.083 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:45.341 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:45.598 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:45.598 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:45.598 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:45.598 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:45.855 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:45.855 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:45.855 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:45.855 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:46.123 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:46.123 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:46.123 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:46.123 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:46.379 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:46.379 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:46.379 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:46.943 No valid GPT data, bailing 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:46.943 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:46.943 00:23:46.943 Discovery Log Number of Records 2, Generation counter 2 00:23:46.943 =====Discovery Log Entry 0====== 00:23:46.943 trtype: tcp 00:23:46.943 adrfam: ipv4 00:23:46.943 subtype: current discovery subsystem 00:23:46.943 treq: not specified, sq flow control disable supported 00:23:46.943 portid: 1 00:23:46.943 trsvcid: 4420 00:23:46.943 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:46.943 traddr: 10.0.0.1 00:23:46.943 eflags: none 00:23:46.943 sectype: none 00:23:46.943 =====Discovery Log Entry 1====== 00:23:46.944 trtype: tcp 00:23:46.944 adrfam: ipv4 00:23:46.944 subtype: nvme subsystem 00:23:46.944 treq: not specified, sq flow control disable supported 00:23:46.944 portid: 1 00:23:46.944 trsvcid: 4420 00:23:46.944 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:46.944 traddr: 10.0.0.1 00:23:46.944 eflags: none 00:23:46.944 sectype: none 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 
]] 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.944 01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.202 nvme0n1 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.202 
01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: ]] 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.202 
01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.202 01:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.459 nvme0n1 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:47.459 01:00:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: ]] 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.459 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.460 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.460 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.460 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.460 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.460 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.460 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.460 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.460 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.460 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.460 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.460 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.460 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:47.460 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.460 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.716 nvme0n1 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
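Each connect_authenticate pass in this trace follows the same pattern: nvmet_auth_set_key provisions the matching secret on the kernel-target side, bdev_nvme_set_options narrows the host to one digest/dhgroup combination, and a controller is attached with the corresponding key pair, verified via bdev_nvme_get_controllers, and detached again. A condensed sketch of the sha256/ffdhe2048/key1 pass, with the address and NQNs taken from the trace:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sudo $RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
sudo $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
sudo $RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
sudo $RPC bdev_nvme_detach_controller nvme0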
00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: ]] 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.716 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.717 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.974 nvme0n1 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: ]] 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:23:47.974 01:00:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.974 nvme0n1 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.974 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.232 nvme0n1 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: ]] 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.232 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.233 01:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.233 01:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:48.233 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.233 01:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.490 nvme0n1 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: ]] 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.490 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.747 nvme0n1 00:23:48.747 
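The cycle that has just completed above (program the key on the target, reconfigure the initiator, attach, verify, detach) repeats for every digest/DH-group/key-id combination that follows. A minimal bash sketch of that per-key cycle, reconstructed only from the rpc_cmd calls visible in this trace, is shown here; the actual connect_authenticate body in test/nvmf/host/auth.sh may differ, and the address, port, and NQN values are simply the ones logged above.

connect_authenticate() {    # sketch reconstructed from the trace, not the auth.sh source
    local digest=$1 dhgroup=$2 keyid=$3
    # optional controller key: expands to nothing when ckeys[keyid] is empty
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # limit the initiator to the digest / DH group under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # attach over TCP with the DH-HMAC-CHAP key under test (values as logged: 10.0.0.1:4420)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # authentication succeeded only if the controller actually shows up afterwards
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}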
01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.747 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: ]] 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.005 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.005 nvme0n1 00:23:49.006 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.006 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.006 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.006 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.006 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.006 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.006 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.006 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.006 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.006 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.262 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.262 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.262 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:49.262 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.262 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.262 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:49.262 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:49.262 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:23:49.262 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:23:49.262 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.262 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:23:49.262 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:23:49.262 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: ]] 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.263 nvme0n1 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.263 01:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.263 
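One detail worth noting in the ckey= lines above is the ${ckeys[keyid]:+...} expansion: when no controller key is configured for a key id (ckey='' as logged for key id 4), the array stays empty and bdev_nvme_attach_controller is invoked without --dhchap-ctrlr-key at all. A standalone illustration with hypothetical array contents, assuming plain bash:

ckeys=("DHHC-1:03:placeholder" "")   # hypothetical: key id 0 has a ctrlr key, key id 1 does not
for keyid in 0 1; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
done
# prints: keyid=0 -> 2 extra args: --dhchap-ctrlr-key ckey0
#         keyid=1 -> 0 extra args: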
01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.263 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.263 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.263 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.520 01:00:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.520 nvme0n1 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.520 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: ]] 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:49.778 01:00:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.778 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.035 nvme0n1 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: ]] 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:50.035 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:50.036 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.036 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:50.036 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.036 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.036 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.036 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.036 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.036 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.036 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.036 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.036 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.036 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.036 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.036 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.036 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.036 01:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.036 01:00:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:50.036 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.036 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.293 nvme0n1 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: ]] 00:23:50.293 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:23:50.294 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:50.294 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.294 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:50.294 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:50.294 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:50.294 01:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.294 01:00:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:50.294 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.294 01:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.294 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.294 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.294 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.294 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.294 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.294 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.294 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.294 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.294 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.294 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.294 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.294 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.294 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:50.294 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.294 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.552 nvme0n1 00:23:50.552 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.552 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.552 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.552 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.552 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: ]] 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.810 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.068 nvme0n1 00:23:51.068 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.068 01:00:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.068 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.069 01:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.327 nvme0n1 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:23:51.327 01:00:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: ]] 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.327 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.585 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.585 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.585 01:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.585 01:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.585 01:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.585 01:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.585 01:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.585 01:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.585 01:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.585 01:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.585 01:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.585 01:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.585 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:51.585 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.585 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.148 nvme0n1 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.148 
01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:52.148 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: ]] 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.149 01:00:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.149 01:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.712 nvme0n1 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: ]] 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.712 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.276 nvme0n1 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.276 
01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: ]] 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.276 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.277 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.277 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.277 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.277 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.277 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.277 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.277 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.277 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.277 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.277 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.277 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.277 01:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.277 01:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:53.277 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.277 01:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.843 nvme0n1 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.843 01:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.408 nvme0n1 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: ]] 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.409 01:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.343 nvme0n1 00:23:55.343 01:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.343 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.343 01:00:30 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.343 01:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.343 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.600 01:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.600 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.600 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.600 01:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: ]] 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.601 01:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.632 nvme0n1 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: ]] 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.632 01:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.565 nvme0n1 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.565 
01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: ]] 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.565 01:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.498 nvme0n1 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.498 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:58.499 
01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.499 01:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.433 nvme0n1 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: ]] 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.433 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.691 nvme0n1 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: ]] 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.691 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.950 nvme0n1 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: ]] 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.950 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.209 nvme0n1 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: ]] 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.209 01:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.468 nvme0n1 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.468 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.727 nvme0n1 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: ]] 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
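Note on the nvmet_auth_set_key calls traced above: for each (digest, dhgroup, keyid) combination the helper selects 'hmac(sha384)', the FFDHE group, and the DHHC-1 host secret (plus a controller secret when a ckey is defined) for the target-side host entry. A minimal sketch of what such a helper could do against the kernel nvmet configfs follows; the configfs paths, the HOSTNQN value, and the attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are illustrative assumptions and are not taken from this log, and the secrets are deliberately elided.

    # Sketch only: target-side DH-HMAC-CHAP provisioning via nvmet configfs (paths assumed)
    HOSTNQN=nqn.2024-02.io.spdk:host0                      # host entry assumed to already exist
    HOSTDIR=/sys/kernel/config/nvmet/hosts/$HOSTNQN
    echo 'hmac(sha384)'   > "$HOSTDIR/dhchap_hash"         # digest, as echoed by host/auth.sh@48
    echo 'ffdhe2048'      > "$HOSTDIR/dhchap_dhgroup"      # DH group, as echoed by host/auth.sh@49
    echo 'DHHC-1:00:...'  > "$HOSTDIR/dhchap_key"          # host secret (elided)
    echo 'DHHC-1:03:...'  > "$HOSTDIR/dhchap_ctrl_key"     # controller secret, only when a ckey is set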
00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.727 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.985 nvme0n1 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: ]] 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
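Each connect_authenticate iteration traced in this log follows the same host-side RPC sequence: restrict the initiator to one digest and one DH group with bdev_nvme_set_options, attach the controller over TCP with the matching --dhchap-key/--dhchap-ctrlr-key, confirm the controller came up by name, then detach before the next keyid. A condensed sketch of one iteration is shown below; rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client, and keyN/ckeyN are assumed to name secrets the script registered earlier.

    # Sketch of one connect_authenticate pass (values mirror the ffdhe3072/keyid=1 trace above)
    digest=sha384 dhgroup=ffdhe3072 keyid=1
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # auth succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0                               # clean up for the next keyid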
00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.985 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:00.986 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.986 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.986 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.986 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.986 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.986 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.986 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.986 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.986 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.986 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.986 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.986 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.986 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.986 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.986 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:00.986 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.986 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.244 nvme0n1 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: ]] 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.244 01:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.502 nvme0n1 00:24:01.502 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: ]] 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.503 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.761 nvme0n1 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:01.761 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.762 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.020 nvme0n1 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.020 01:00:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: ]] 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:02.020 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.021 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.279 nvme0n1 00:24:02.279 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.279 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.279 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.279 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.279 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.279 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.279 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.279 01:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.279 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.279 01:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: ]] 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.279 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.846 nvme0n1 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.846 01:00:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: ]] 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.846 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.847 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.105 nvme0n1 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: ]] 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:03.105 01:00:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.105 01:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.364 nvme0n1 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:03.364 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.936 nvme0n1 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: ]] 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.936 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.528 nvme0n1 00:24:04.528 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.528 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.528 01:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.528 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.528 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.528 01:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.528 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.528 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.528 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.528 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.528 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.528 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.528 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:04.528 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.528 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.528 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:04.528 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:04.528 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:04.528 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:04.528 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.528 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:04.528 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:04.528 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: ]] 00:24:04.528 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.529 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.786 nvme0n1 00:24:04.787 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.045 01:00:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: ]] 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.045 01:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.611 nvme0n1 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: ]] 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.611 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.175 nvme0n1 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:06.175 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
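Every iteration traced above follows the same initiator-side sequence: restrict the allowed digest and DH group with bdev_nvme_set_options, attach the controller with the per-keyid DH-HMAC-CHAP key (plus the controller key when one exists), confirm the controller came up with bdev_nvme_get_controllers, then detach it before the next digest/dhgroup/keyid combination. A minimal sketch of one such iteration, assuming scripts/rpc.py is called directly instead of the test framework's rpc_cmd wrapper and that the key names key1/ckey1 were registered earlier in the script (not shown in this part of the trace):

    # Limit the initiator to one DH-HMAC-CHAP digest and one FFDHE group for this round
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

    # Attach to the authenticating target; key1/ckey1 are assumed to be pre-registered key names
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify the authenticated controller exists, then tear it down for the next combination
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
    scripts/rpc.py bdev_nvme_detach_controller nvme0

On the target side, the nvmet_auth_set_key helper (host/auth.sh@42-51 in the trace) echoes the hash name, DH group, and DHHC-1 secrets into the kernel nvmet target configuration for the host NQN before each attach; the redirection targets are not visible because bash xtrace does not print redirections.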
00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.176 01:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.777 nvme0n1 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: ]] 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.777 01:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.708 nvme0n1 00:24:07.708 01:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.708 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.708 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.708 01:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.708 01:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.708 01:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: ]] 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.966 01:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.897 nvme0n1 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: ]] 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.897 01:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.898 01:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.898 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.898 01:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:08.898 01:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.898 01:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.898 01:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.898 01:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.898 01:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:08.898 01:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.898 01:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:08.898 01:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:08.898 01:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:08.898 01:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:08.898 01:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.898 01:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.833 nvme0n1 00:24:09.833 01:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.833 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.833 01:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.833 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.833 01:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.833 01:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.833 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.833 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.833 01:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.833 01:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.833 01:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: ]] 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.834 01:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.806 nvme0n1 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:10.806 01:00:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.806 01:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.739 nvme0n1 00:24:11.739 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.739 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.739 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.739 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.739 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.739 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.739 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.739 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.739 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.739 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.997 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: ]] 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.998 nvme0n1 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.998 01:00:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: ]] 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.998 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.257 nvme0n1 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: ]] 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.257 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.258 01:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.516 nvme0n1 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.516 01:00:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: ]] 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.516 01:00:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.516 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.777 nvme0n1 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.777 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.036 nvme0n1 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: ]] 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.036 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.296 nvme0n1 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.296 
01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: ]] 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.296 01:00:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.296 01:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.554 nvme0n1 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
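The nvmet_auth_set_key calls traced above (host/auth.sh@42-51) provision the kernel nvmet target with the same digest, DH group and DHHC-1 secrets that the initiator presents later in the same iteration. The echoes at auth.sh@48-51 correspond to writes into the target's per-host authentication attributes; the lines below are a minimal sketch of that step, assuming the standard Linux nvmet configfs layout (the exact paths are not visible in this excerpt) and reusing the keyid=2 values from the trace:

#!/usr/bin/env bash
# Hedged reconstruction of nvmet_auth_set_key for sha512 / ffdhe3072 / keyid=2.
# The configfs location is an assumption; only the echoed values come from the log.
hostnqn=nqn.2024-02.io.spdk:host0
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn

echo 'hmac(sha512)' > "$host_dir/dhchap_hash"       # digest, auth.sh@48
echo 'ffdhe3072'    > "$host_dir/dhchap_dhgroup"    # DH group, auth.sh@49
echo 'DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs:' \
    > "$host_dir/dhchap_key"                        # host secret, auth.sh@50
# auth.sh@51 only writes a controller secret when a ckey is defined (bidirectional auth)
ckey='DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc:'
[[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"
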
00:24:13.554 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: ]] 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.555 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.813 nvme0n1 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.813 01:00:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: ]] 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
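The nvmf/common.sh@741-755 block that recurs before every attach is the get_main_ns_ip helper deciding which address the initiator should dial: it keeps a per-transport table of candidate variable names, dereferences the entry for the active transport, and prints the result (10.0.0.1 in this run). The following is a reconstruction from the xtrace; the function wrapper, the transport variable name, and the indirect expansion are guesses, everything else mirrors the traced lines:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP    # common.sh@744
        ["tcp"]=NVMF_INITIATOR_IP        # common.sh@745
    )

    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # common.sh@747
    ip=${ip_candidates[$TEST_TRANSPORT]}                                           # common.sh@748
    ip=${!ip}     # inferred: the trace jumps from @748 to a check against 10.0.0.1 at @750
    [[ -z $ip ]] && return 1                                                       # common.sh@750
    echo "$ip"                                                                     # common.sh@755
}
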
00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.813 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.814 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:13.814 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.814 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.071 nvme0n1 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:14.071 
01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.071 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.330 nvme0n1 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: ]] 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.330 01:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.590 nvme0n1 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: ]] 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.590 01:00:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.590 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.157 nvme0n1 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
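Each iteration ends the way the trace shows at host/auth.sh@64-65: list the controllers over RPC, confirm the authenticated attach produced nvme0, then detach so the next digest/dhgroup/keyid combination starts clean. A condensed sketch using only the RPCs visible in the log (rpc_cmd is the test suite's wrapper around SPDK's rpc.py):

name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')   # auth.sh@64
[[ $name == "nvme0" ]]                                         # attach + DH-HMAC-CHAP succeeded
rpc_cmd bdev_nvme_detach_controller nvme0                      # auth.sh@65, tear down for the next combo
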
00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: ]] 00:24:15.157 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.158 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.416 nvme0n1 00:24:15.416 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.416 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:24:15.416 01:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.416 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.416 01:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: ]] 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:15.416 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.417 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.675 nvme0n1 00:24:15.675 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.675 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.675 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.675 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.675 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.675 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.675 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.675 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.675 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.675 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.675 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.675 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.675 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:15.675 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.675 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:15.675 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:15.675 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.676 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.244 nvme0n1 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: ]] 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
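Every digest/dhgroup/keyid combination exercised in this trace follows the same verification cycle. Below is a minimal sketch of one pass, assuming SPDK's scripts/rpc.py client is on the path and that the key0..key4 / ckey0..ckey4 key names were registered with the target earlier in the test (outside this excerpt); sha512/ffdhe6144 are simply the pair this portion of the log is iterating over, and only RPC names and flags that appear verbatim in the trace are used:

  # Host side: restrict negotiation to the digest and DH group under test
  # (the target-side secret was just installed by nvmet_auth_set_key above).
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

  # Attach, authenticating with the matching key; the controller key is optional,
  # e.g. keyid 4 in this run has an empty ckey, so that attach is unidirectional.
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Confirm the controller appeared, then detach before the next iteration.
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0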
00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.244 01:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.812 nvme0n1 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: ]] 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
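The get_main_ns_ip fragments that recur throughout the trace (ip_candidates, the [[ -z ... ]] tests, the closing echo 10.0.0.1) amount to a small transport-to-address lookup. The sketch below reconstructs that logic from the trace rather than quoting the actual nvmf/common.sh implementation; the TEST_TRANSPORT variable name and the exported value of NVMF_INITIATOR_IP are assumptions, since only their expanded values (tcp, 10.0.0.1) are visible here:

  get_main_ns_ip() {
      local ip
      # Map each transport to the environment variable holding the address to dial.
      declare -A ip_candidates=( ["rdma"]="NVMF_FIRST_TARGET_IP" ["tcp"]="NVMF_INITIATOR_IP" )
      [[ -n "${ip_candidates[$TEST_TRANSPORT]:-}" ]] || return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      # Indirect expansion: dereference the chosen variable and print the address.
      [[ -n "${!ip:-}" ]] && echo "${!ip}"
  }

  TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip   # prints 10.0.0.1

The indirection lets one helper serve both rdma and tcp runs; this job's transport is tcp, so the lookup resolves to NVMF_INITIATOR_IP and the printed 10.0.0.1 is what gets passed as -a to bdev_nvme_attach_controller.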
00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.812 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:16.813 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.813 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:16.813 01:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.813 01:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.813 01:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.813 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.813 01:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:16.813 01:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:16.813 01:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:16.813 01:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.813 01:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.813 01:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:16.813 01:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.813 01:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:16.813 01:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:16.813 01:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:16.813 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:16.813 01:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.813 01:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.382 nvme0n1 00:24:17.382 01:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.382 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.382 01:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.382 01:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.382 01:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.382 01:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: ]] 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.382 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.951 nvme0n1 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: ]] 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.951 01:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.530 nvme0n1 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.530 01:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.531 01:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:18.531 01:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.531 01:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:18.531 01:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:18.531 01:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:18.531 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:18.531 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.531 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.102 nvme0n1 00:24:19.102 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.102 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.102 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.102 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.102 01:00:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.102 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.359 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRiZDNmNWJjMTM3NGEwOWY4ZTkxZTI3N2U3Y2ZjNDewK9ul: 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: ]] 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhNDk0ZGM1MzllZjRjMGU4MTA0NmM4NTk5ZDllMDZmMjJkZDkxZWYxMTA3NjgyMWVhOWJkODY2YjlkMWQzMpDbC4M=: 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.360 01:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.297 nvme0n1 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: ]] 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.297 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.298 01:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.298 01:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.298 01:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.298 01:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.298 01:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.298 01:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:20.298 01:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.298 01:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:20.298 01:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:20.298 01:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:20.298 01:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.298 01:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.298 01:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.237 nvme0n1 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.237 01:00:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA1YTUyMDRjNGE2Y2NkNzA5NzA1MzY1MmU3ZWQ4ZTeV8gqs: 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: ]] 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkNGM2NTc0NmJlOTdmODRkNTM3Y2ZiYzY5ZjM0MDH6NQhc: 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.237 01:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.617 nvme0n1 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:22.617 01:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzJmZmY5NDA2MGJkYzAzZDIyZTc0MTIxYmMxYjNiODExOTAxY2M4ZjdkNzU1M2I1W+pjrw==: 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: ]] 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJhNTUwMmE3MThjNjU0ODJkZjM3M2Q5Zjk1NTFkZWUhv5xp: 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:22.617 01:00:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.617 01:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.555 nvme0n1 00:24:23.555 01:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.555 01:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.555 01:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.555 01:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.555 01:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU3YWViNGI2YmEzZmVmNWU1YmE2YTkzODI2N2I3OTM2N2RkZGE1Yzc1NWIxOGMwZmM1NmZmNjBjOGM3YWU2Yf8QU/w=: 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:23.555 01:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.515 nvme0n1 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc2YzQ3YzMxM2IwMjk3ZTlhMDgwNzc4OWUxMWM5M2Y3ODZkYWU0NTg4M2NjYzJkzvv6yA==: 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: ]] 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmU4YWNmOTg0NzU1OTNhMmM2NTcxNGJkMWNlYTk2ZWY5YWM3NjQxZjExMTBlZTM32ATjlA==: 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.515 
01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.515 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.516 request: 00:24:24.516 { 00:24:24.516 "name": "nvme0", 00:24:24.516 "trtype": "tcp", 00:24:24.516 "traddr": "10.0.0.1", 00:24:24.516 "adrfam": "ipv4", 00:24:24.516 "trsvcid": "4420", 00:24:24.516 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:24.516 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:24.516 "prchk_reftag": false, 00:24:24.516 "prchk_guard": false, 00:24:24.516 "hdgst": false, 00:24:24.516 "ddgst": false, 00:24:24.516 "method": "bdev_nvme_attach_controller", 00:24:24.516 "req_id": 1 00:24:24.516 } 00:24:24.516 Got JSON-RPC error response 00:24:24.516 response: 00:24:24.516 { 00:24:24.516 "code": -5, 00:24:24.516 "message": "Input/output error" 00:24:24.516 } 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.516 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.780 request: 00:24:24.780 { 00:24:24.780 "name": "nvme0", 00:24:24.780 "trtype": "tcp", 00:24:24.780 "traddr": "10.0.0.1", 00:24:24.780 "adrfam": "ipv4", 00:24:24.780 "trsvcid": "4420", 00:24:24.780 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:24.780 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:24.780 "prchk_reftag": false, 00:24:24.780 "prchk_guard": false, 00:24:24.780 "hdgst": false, 00:24:24.780 "ddgst": false, 00:24:24.780 "dhchap_key": "key2", 00:24:24.780 "method": "bdev_nvme_attach_controller", 00:24:24.780 "req_id": 1 00:24:24.780 } 00:24:24.780 Got JSON-RPC error response 00:24:24.780 response: 00:24:24.780 { 00:24:24.780 "code": -5, 00:24:24.780 "message": "Input/output error" 00:24:24.780 } 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:24.780 01:00:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.780 request: 00:24:24.780 { 00:24:24.780 "name": "nvme0", 00:24:24.780 "trtype": "tcp", 00:24:24.780 "traddr": "10.0.0.1", 00:24:24.780 "adrfam": "ipv4", 
00:24:24.780 "trsvcid": "4420", 00:24:24.780 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:24.780 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:24.780 "prchk_reftag": false, 00:24:24.780 "prchk_guard": false, 00:24:24.780 "hdgst": false, 00:24:24.780 "ddgst": false, 00:24:24.780 "dhchap_key": "key1", 00:24:24.780 "dhchap_ctrlr_key": "ckey2", 00:24:24.780 "method": "bdev_nvme_attach_controller", 00:24:24.780 "req_id": 1 00:24:24.780 } 00:24:24.780 Got JSON-RPC error response 00:24:24.780 response: 00:24:24.780 { 00:24:24.780 "code": -5, 00:24:24.780 "message": "Input/output error" 00:24:24.780 } 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:24.780 rmmod nvme_tcp 00:24:24.780 rmmod nvme_fabrics 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2730248 ']' 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2730248 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2730248 ']' 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2730248 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2730248 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2730248' 00:24:24.780 killing process with pid 2730248 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2730248 00:24:24.780 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2730248 00:24:25.040 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:24:25.040 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:25.040 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:25.040 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:25.040 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:25.040 01:00:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.040 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.040 01:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.573 01:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:27.573 01:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:27.573 01:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:27.573 01:01:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:27.573 01:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:27.573 01:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:24:27.573 01:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:27.573 01:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:27.573 01:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:27.573 01:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:27.573 01:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:27.573 01:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:27.573 01:01:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:28.507 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:28.507 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:28.507 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:28.507 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:28.507 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:28.507 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:28.507 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:28.507 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:28.507 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:28.507 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:28.507 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:28.507 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:28.507 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:28.507 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:28.507 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:28.507 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:29.445 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:29.445 01:01:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Foe /tmp/spdk.key-null.XTK /tmp/spdk.key-sha256.WAa /tmp/spdk.key-sha384.RJL /tmp/spdk.key-sha512.7mr 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:29.445 01:01:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:30.823 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:30.823 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:30.823 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:30.823 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:30.823 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:30.823 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:30.823 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:30.823 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:30.823 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:30.823 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:30.823 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:30.823 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:30.823 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:30.823 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:30.823 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:30.823 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:30.823 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:30.823 00:24:30.823 real 0m50.052s 00:24:30.823 user 0m47.978s 00:24:30.823 sys 0m5.730s 00:24:30.823 01:01:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:30.823 01:01:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.823 ************************************ 00:24:30.823 END TEST nvmf_auth_host 00:24:30.823 ************************************ 00:24:30.823 01:01:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:30.824 01:01:05 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:24:30.824 01:01:05 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:30.824 01:01:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:30.824 01:01:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:30.824 01:01:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:30.824 ************************************ 00:24:30.824 START TEST nvmf_digest 00:24:30.824 ************************************ 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:30.824 * Looking for test storage... 
00:24:30.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:30.824 01:01:05 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:24:30.824 01:01:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:33.355 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:33.355 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:33.355 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:33.355 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.355 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:33.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:24:33.356 00:24:33.356 --- 10.0.0.2 ping statistics --- 00:24:33.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.356 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:33.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:24:33.356 00:24:33.356 --- 10.0.0.1 ping statistics --- 00:24:33.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.356 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:33.356 ************************************ 00:24:33.356 START TEST nvmf_digest_clean 00:24:33.356 ************************************ 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2739800 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2739800 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2739800 ']' 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.356 
01:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:33.356 01:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:33.356 [2024-07-16 01:01:07.831381] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:24:33.356 [2024-07-16 01:01:07.831450] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.356 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.356 [2024-07-16 01:01:07.899407] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.356 [2024-07-16 01:01:08.018149] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.356 [2024-07-16 01:01:08.018211] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.356 [2024-07-16 01:01:08.018227] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.356 [2024-07-16 01:01:08.018240] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.356 [2024-07-16 01:01:08.018251] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:33.356 [2024-07-16 01:01:08.018287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.356 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:33.356 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:33.356 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:33.356 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:33.356 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:33.356 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.356 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:33.356 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:33.356 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:33.356 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.356 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:33.614 null0 00:24:33.614 [2024-07-16 01:01:08.214387] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.614 [2024-07-16 01:01:08.238613] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.614 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.614 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:33.614 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:33.614 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:33.614 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:33.614 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:33.614 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:33.614 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:33.614 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2739825 00:24:33.614 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:33.614 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2739825 /var/tmp/bperf.sock 00:24:33.614 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2739825 ']' 00:24:33.614 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:33.614 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:33.614 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:24:33.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:33.614 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:33.614 01:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:33.614 [2024-07-16 01:01:08.289120] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:24:33.614 [2024-07-16 01:01:08.289204] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2739825 ] 00:24:33.614 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.614 [2024-07-16 01:01:08.355842] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.873 [2024-07-16 01:01:08.477063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.808 01:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:34.808 01:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:34.808 01:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:34.808 01:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:34.808 01:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:35.066 01:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:35.066 01:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:35.634 nvme0n1 00:24:35.634 01:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:35.634 01:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:35.634 Running I/O for 2 seconds... 
00:24:37.538 00:24:37.538 Latency(us) 00:24:37.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.538 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:37.538 nvme0n1 : 2.00 19611.83 76.61 0.00 0.00 6518.74 2645.71 13786.83 00:24:37.538 =================================================================================================================== 00:24:37.538 Total : 19611.83 76.61 0.00 0.00 6518.74 2645.71 13786.83 00:24:37.538 0 00:24:37.538 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:37.538 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:37.538 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:37.538 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:37.538 | select(.opcode=="crc32c") 00:24:37.538 | "\(.module_name) \(.executed)"' 00:24:37.538 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:37.797 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:37.797 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:37.797 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:37.797 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:37.797 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2739825 00:24:37.797 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2739825 ']' 00:24:37.797 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2739825 00:24:37.797 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:37.797 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:37.797 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2739825 00:24:37.797 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:37.797 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:37.797 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2739825' 00:24:37.797 killing process with pid 2739825 00:24:37.797 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2739825 00:24:37.797 Received shutdown signal, test time was about 2.000000 seconds 00:24:37.797 00:24:37.797 Latency(us) 00:24:37.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.797 =================================================================================================================== 00:24:37.797 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:37.797 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2739825 00:24:38.367 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:38.367 01:01:12 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:38.367 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:38.367 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:38.367 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:38.367 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:38.367 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:38.367 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2740360 00:24:38.367 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:38.367 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2740360 /var/tmp/bperf.sock 00:24:38.367 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2740360 ']' 00:24:38.367 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:38.367 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:38.367 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:38.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:38.367 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:38.367 01:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:38.367 [2024-07-16 01:01:12.862763] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:24:38.367 [2024-07-16 01:01:12.862845] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2740360 ] 00:24:38.367 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:38.367 Zero copy mechanism will not be used. 
00:24:38.367 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.367 [2024-07-16 01:01:12.923387] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.367 [2024-07-16 01:01:13.039889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.367 01:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:38.367 01:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:38.367 01:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:38.367 01:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:38.367 01:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:38.952 01:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:38.952 01:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:39.209 nvme0n1 00:24:39.209 01:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:39.209 01:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:39.467 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:39.467 Zero copy mechanism will not be used. 00:24:39.467 Running I/O for 2 seconds... 
00:24:41.384 00:24:41.384 Latency(us) 00:24:41.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.384 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:41.384 nvme0n1 : 2.00 2492.41 311.55 0.00 0.00 6414.65 6068.15 10097.40 00:24:41.384 =================================================================================================================== 00:24:41.384 Total : 2492.41 311.55 0.00 0.00 6414.65 6068.15 10097.40 00:24:41.384 0 00:24:41.384 01:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:41.384 01:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:41.384 01:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:41.384 01:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:41.384 01:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:41.384 | select(.opcode=="crc32c") 00:24:41.384 | "\(.module_name) \(.executed)"' 00:24:41.695 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:41.695 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:41.695 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:41.695 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:41.695 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2740360 00:24:41.695 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2740360 ']' 00:24:41.695 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2740360 00:24:41.695 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:41.695 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:41.695 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2740360 00:24:41.695 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:41.695 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:41.695 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2740360' 00:24:41.695 killing process with pid 2740360 00:24:41.695 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2740360 00:24:41.695 Received shutdown signal, test time was about 2.000000 seconds 00:24:41.695 00:24:41.695 Latency(us) 00:24:41.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.695 =================================================================================================================== 00:24:41.695 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:41.695 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2740360 00:24:41.954 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:41.954 01:01:16 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:41.954 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:41.954 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:41.954 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:41.954 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:41.954 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:41.954 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2740887 00:24:41.954 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:41.954 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2740887 /var/tmp/bperf.sock 00:24:41.954 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2740887 ']' 00:24:41.954 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:41.954 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:41.954 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:41.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:41.954 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:41.954 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:41.954 [2024-07-16 01:01:16.596072] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:24:41.954 [2024-07-16 01:01:16.596166] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2740887 ] 00:24:41.954 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.954 [2024-07-16 01:01:16.655219] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.212 [2024-07-16 01:01:16.766124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.212 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:42.212 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:42.212 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:42.212 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:42.212 01:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:42.470 01:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:42.470 01:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:43.039 nvme0n1 00:24:43.039 01:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:43.039 01:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:43.039 Running I/O for 2 seconds... 
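The acc_module/acc_executed check that follows each of these runs (digest.sh lines 93-96 in the trace above) is the actual pass criterion: some crc32c digest work must have been executed, and by the expected module (software here, since DSA scanning is disabled). A rough, hedged reconstruction of that check, reusing the jq filter printed in the log and assuming the same SPDK_DIR as above:

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
SOCK=/var/tmp/bperf.sock

# Reduce the accel framework's crc32c statistics to "<module_name> <executed>".
read -r acc_module acc_executed < <(
    "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

exp_module=software   # scan_dsa=false in these runs, so software crc32c is expected
if (( acc_executed > 0 )) && [[ $acc_module == "$exp_module" ]]; then
    echo "digest work executed by $acc_module ($acc_executed operations)"
else
    echo "unexpected accel stats: $acc_module $acc_executed" >&2
fi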
00:24:44.943 00:24:44.943 Latency(us) 00:24:44.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.943 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:44.943 nvme0n1 : 2.00 20678.91 80.78 0.00 0.00 6180.31 2791.35 9951.76 00:24:44.943 =================================================================================================================== 00:24:44.943 Total : 20678.91 80.78 0.00 0.00 6180.31 2791.35 9951.76 00:24:44.943 0 00:24:44.943 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:44.943 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:44.943 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:44.943 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:44.943 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:44.943 | select(.opcode=="crc32c") 00:24:44.943 | "\(.module_name) \(.executed)"' 00:24:45.201 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:45.201 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:45.201 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:45.201 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:45.201 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2740887 00:24:45.201 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2740887 ']' 00:24:45.201 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2740887 00:24:45.201 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:45.201 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:45.201 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2740887 00:24:45.460 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:45.460 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:45.460 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2740887' 00:24:45.460 killing process with pid 2740887 00:24:45.460 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2740887 00:24:45.460 Received shutdown signal, test time was about 2.000000 seconds 00:24:45.460 00:24:45.460 Latency(us) 00:24:45.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.460 =================================================================================================================== 00:24:45.460 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:45.460 01:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2740887 00:24:45.720 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:45.720 01:01:20 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:45.720 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:45.720 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:45.720 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:45.720 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:45.720 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:45.720 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2741302 00:24:45.720 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:45.720 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2741302 /var/tmp/bperf.sock 00:24:45.720 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2741302 ']' 00:24:45.720 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:45.720 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:45.720 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:45.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:45.720 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:45.720 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:45.720 [2024-07-16 01:01:20.278789] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:24:45.720 [2024-07-16 01:01:20.278866] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2741302 ] 00:24:45.720 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:45.720 Zero copy mechanism will not be used. 
00:24:45.720 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.720 [2024-07-16 01:01:20.339549] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.720 [2024-07-16 01:01:20.452580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.977 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:45.978 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:45.978 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:45.978 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:45.978 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:46.235 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:46.235 01:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:46.493 nvme0n1 00:24:46.493 01:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:46.493 01:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:46.493 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:46.493 Zero copy mechanism will not be used. 00:24:46.493 Running I/O for 2 seconds... 
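As a reading aid for the bdevperf result tables above and below: the MiB/s column is simply IOPS multiplied by the I/O size, so the two completed runs are internally consistent. A quick check:

# IOPS * io_size_bytes / 1 MiB should reproduce the MiB/s column
awk 'BEGIN { printf "%.2f\n", 2492.41  * 131072 / 1048576 }'   # 128 KiB randread  -> 311.55
awk 'BEGIN { printf "%.2f\n", 20678.91 * 4096   / 1048576 }'   # 4 KiB  randwrite -> 80.78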
00:24:49.025 00:24:49.025 Latency(us) 00:24:49.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.025 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:49.025 nvme0n1 : 2.01 1674.15 209.27 0.00 0.00 9530.48 7330.32 16796.63 00:24:49.025 =================================================================================================================== 00:24:49.025 Total : 1674.15 209.27 0.00 0.00 9530.48 7330.32 16796.63 00:24:49.025 0 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:49.025 | select(.opcode=="crc32c") 00:24:49.025 | "\(.module_name) \(.executed)"' 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2741302 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2741302 ']' 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2741302 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2741302 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2741302' 00:24:49.025 killing process with pid 2741302 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2741302 00:24:49.025 Received shutdown signal, test time was about 2.000000 seconds 00:24:49.025 00:24:49.025 Latency(us) 00:24:49.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.025 =================================================================================================================== 00:24:49.025 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:49.025 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2741302 00:24:49.284 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2739800 00:24:49.284 01:01:23 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2739800 ']' 00:24:49.284 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2739800 00:24:49.284 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:49.284 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:49.284 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2739800 00:24:49.284 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:49.284 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:49.284 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2739800' 00:24:49.284 killing process with pid 2739800 00:24:49.284 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2739800 00:24:49.284 01:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2739800 00:24:49.543 00:24:49.543 real 0m16.342s 00:24:49.543 user 0m33.261s 00:24:49.543 sys 0m3.746s 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:49.543 ************************************ 00:24:49.543 END TEST nvmf_digest_clean 00:24:49.543 ************************************ 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:49.543 ************************************ 00:24:49.543 START TEST nvmf_digest_error 00:24:49.543 ************************************ 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2741733 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2741733 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2741733 ']' 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:49.543 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:49.543 [2024-07-16 01:01:24.235957] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:24:49.543 [2024-07-16 01:01:24.236034] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.543 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.801 [2024-07-16 01:01:24.304918] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.801 [2024-07-16 01:01:24.420262] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.801 [2024-07-16 01:01:24.420330] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.801 [2024-07-16 01:01:24.420347] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:49.801 [2024-07-16 01:01:24.420359] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:49.801 [2024-07-16 01:01:24.420370] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
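The nvmf_digest_error test that begins here differs from the clean runs above: the target's crc32c work is assigned to the accel error module and corruption is injected, so the initiator is expected to see data digest errors. The surrounding entries also show a null0 bdev and a TCP listener on 10.0.0.2:4420 being set up on the target. A hedged sketch of the RPC sequence visible in the log, assuming SPDK_DIR is the checkout and that the plain rpc.py calls go to the target's default RPC socket (the log does not print that path):

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
BPERF_SOCK=/var/tmp/bperf.sock

# Target side: route crc32c through the error module while the target is still
# waiting for RPCs (nvmf_tgt was started with --wait-for-rpc).
"$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o crc32c -m error

# Initiator side: keep NVMe error statistics and never give up on retries.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# Injection starts disabled, the controller is attached with data digest enabled,
# then 256 corrupted crc32c results are injected before the workload runs.
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

# The "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" entries that follow
# are the expected outcome of the injected corruption.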
00:24:49.801 [2024-07-16 01:01:24.420400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.801 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:49.801 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:49.801 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:49.801 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:49.801 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:49.801 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:49.801 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:49.801 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.801 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:49.801 [2024-07-16 01:01:24.476954] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:49.801 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.801 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:49.801 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:49.801 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.801 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:50.058 null0 00:24:50.058 [2024-07-16 01:01:24.591504] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.058 [2024-07-16 01:01:24.615728] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.058 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.058 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:50.058 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:50.058 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:50.058 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:50.058 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:50.058 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2741878 00:24:50.058 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:50.058 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2741878 /var/tmp/bperf.sock 00:24:50.058 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2741878 ']' 00:24:50.058 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:50.058 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:24:50.058 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:50.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:50.059 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:50.059 01:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:50.059 [2024-07-16 01:01:24.666481] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:24:50.059 [2024-07-16 01:01:24.666555] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2741878 ] 00:24:50.059 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.059 [2024-07-16 01:01:24.733619] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.317 [2024-07-16 01:01:24.850763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.885 01:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:50.885 01:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:50.885 01:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:50.885 01:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:51.142 01:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:51.142 01:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.142 01:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:51.142 01:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.142 01:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:51.142 01:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:51.711 nvme0n1 00:24:51.711 01:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:51.711 01:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.711 01:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:51.711 01:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.711 01:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:51.711 01:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:51.711 Running I/O for 2 seconds... 00:24:51.711 [2024-07-16 01:01:26.356468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.711 [2024-07-16 01:01:26.356519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.711 [2024-07-16 01:01:26.356542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.711 [2024-07-16 01:01:26.372222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.711 [2024-07-16 01:01:26.372259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.711 [2024-07-16 01:01:26.372279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.711 [2024-07-16 01:01:26.384461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.711 [2024-07-16 01:01:26.384502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.711 [2024-07-16 01:01:26.384523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.711 [2024-07-16 01:01:26.400028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.712 [2024-07-16 01:01:26.400057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.712 [2024-07-16 01:01:26.400088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.712 [2024-07-16 01:01:26.413170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.712 [2024-07-16 01:01:26.413204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.712 [2024-07-16 01:01:26.413223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.712 [2024-07-16 01:01:26.426472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.712 [2024-07-16 01:01:26.426506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.712 [2024-07-16 01:01:26.426525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.712 [2024-07-16 01:01:26.440435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.712 [2024-07-16 01:01:26.440470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10637 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:51.712 [2024-07-16 01:01:26.440489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.712 [2024-07-16 01:01:26.455178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.712 [2024-07-16 01:01:26.455205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.712 [2024-07-16 01:01:26.455221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.712 [2024-07-16 01:01:26.466967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.712 [2024-07-16 01:01:26.466997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.712 [2024-07-16 01:01:26.467014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.970 [2024-07-16 01:01:26.481103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.970 [2024-07-16 01:01:26.481132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.970 [2024-07-16 01:01:26.481163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.970 [2024-07-16 01:01:26.497124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.970 [2024-07-16 01:01:26.497156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.970 [2024-07-16 01:01:26.497187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.970 [2024-07-16 01:01:26.509873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.970 [2024-07-16 01:01:26.509928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.970 [2024-07-16 01:01:26.509944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.970 [2024-07-16 01:01:26.526039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.970 [2024-07-16 01:01:26.526082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.970 [2024-07-16 01:01:26.526099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.970 [2024-07-16 01:01:26.538208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.970 [2024-07-16 01:01:26.538254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:115 nsid:1 lba:14438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.970 [2024-07-16 01:01:26.538273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.970 [2024-07-16 01:01:26.555870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.970 [2024-07-16 01:01:26.555914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.971 [2024-07-16 01:01:26.555933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.971 [2024-07-16 01:01:26.570043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.971 [2024-07-16 01:01:26.570074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.971 [2024-07-16 01:01:26.570091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.971 [2024-07-16 01:01:26.583654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.971 [2024-07-16 01:01:26.583688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.971 [2024-07-16 01:01:26.583706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.971 [2024-07-16 01:01:26.596973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.971 [2024-07-16 01:01:26.597004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.971 [2024-07-16 01:01:26.597020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.971 [2024-07-16 01:01:26.609795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.971 [2024-07-16 01:01:26.609830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.971 [2024-07-16 01:01:26.609849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.971 [2024-07-16 01:01:26.624520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.971 [2024-07-16 01:01:26.624554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.971 [2024-07-16 01:01:26.624585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.971 [2024-07-16 01:01:26.639036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.971 [2024-07-16 01:01:26.639068] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.971 [2024-07-16 01:01:26.639086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.971 [2024-07-16 01:01:26.651693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.971 [2024-07-16 01:01:26.651725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.971 [2024-07-16 01:01:26.651743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.971 [2024-07-16 01:01:26.666298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.971 [2024-07-16 01:01:26.666330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.971 [2024-07-16 01:01:26.666347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.971 [2024-07-16 01:01:26.677375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.971 [2024-07-16 01:01:26.677410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.971 [2024-07-16 01:01:26.677429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.971 [2024-07-16 01:01:26.693111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.971 [2024-07-16 01:01:26.693140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.971 [2024-07-16 01:01:26.693171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.971 [2024-07-16 01:01:26.705301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.971 [2024-07-16 01:01:26.705336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.971 [2024-07-16 01:01:26.705355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.971 [2024-07-16 01:01:26.720583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:51.971 [2024-07-16 01:01:26.720618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.971 [2024-07-16 01:01:26.720636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.229 [2024-07-16 01:01:26.737495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19aff00) 00:24:52.229 [2024-07-16 01:01:26.737530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.229 [2024-07-16 01:01:26.737548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.229 [2024-07-16 01:01:26.749657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.229 [2024-07-16 01:01:26.749690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.229 [2024-07-16 01:01:26.749709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.230 [2024-07-16 01:01:26.764784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.230 [2024-07-16 01:01:26.764818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.230 [2024-07-16 01:01:26.764837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.230 [2024-07-16 01:01:26.777072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.230 [2024-07-16 01:01:26.777102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.230 [2024-07-16 01:01:26.777119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.230 [2024-07-16 01:01:26.790749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.230 [2024-07-16 01:01:26.790782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.230 [2024-07-16 01:01:26.790802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.230 [2024-07-16 01:01:26.804970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.230 [2024-07-16 01:01:26.805001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.230 [2024-07-16 01:01:26.805017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.230 [2024-07-16 01:01:26.818112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.230 [2024-07-16 01:01:26.818140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.230 [2024-07-16 01:01:26.818171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.230 [2024-07-16 01:01:26.830182] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.230 [2024-07-16 01:01:26.830240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.230 [2024-07-16 01:01:26.830258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.230 [2024-07-16 01:01:26.846331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.230 [2024-07-16 01:01:26.846366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.230 [2024-07-16 01:01:26.846384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.230 [2024-07-16 01:01:26.859236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.230 [2024-07-16 01:01:26.859269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.230 [2024-07-16 01:01:26.859295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.230 [2024-07-16 01:01:26.872713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.230 [2024-07-16 01:01:26.872746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.230 [2024-07-16 01:01:26.872765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.230 [2024-07-16 01:01:26.886436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.230 [2024-07-16 01:01:26.886471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.230 [2024-07-16 01:01:26.886491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.230 [2024-07-16 01:01:26.899340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.230 [2024-07-16 01:01:26.899374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.230 [2024-07-16 01:01:26.899392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.230 [2024-07-16 01:01:26.914209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.230 [2024-07-16 01:01:26.914245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.230 [2024-07-16 01:01:26.914265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:52.230 [2024-07-16 01:01:26.927355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.230 [2024-07-16 01:01:26.927389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.230 [2024-07-16 01:01:26.927408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.230 [2024-07-16 01:01:26.941429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.230 [2024-07-16 01:01:26.941464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.230 [2024-07-16 01:01:26.941483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.230 [2024-07-16 01:01:26.956029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.230 [2024-07-16 01:01:26.956058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.230 [2024-07-16 01:01:26.956090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.230 [2024-07-16 01:01:26.968976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.230 [2024-07-16 01:01:26.969007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.230 [2024-07-16 01:01:26.969023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.230 [2024-07-16 01:01:26.981899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.230 [2024-07-16 01:01:26.981952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.230 [2024-07-16 01:01:26.981970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.490 [2024-07-16 01:01:26.995056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.490 [2024-07-16 01:01:26.995088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.490 [2024-07-16 01:01:26.995104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.490 [2024-07-16 01:01:27.009227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.490 [2024-07-16 01:01:27.009261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.490 [2024-07-16 01:01:27.009280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.490 [2024-07-16 01:01:27.022339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.490 [2024-07-16 01:01:27.022372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.490 [2024-07-16 01:01:27.022391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.490 [2024-07-16 01:01:27.036602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.490 [2024-07-16 01:01:27.036635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.490 [2024-07-16 01:01:27.036654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.490 [2024-07-16 01:01:27.049513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.490 [2024-07-16 01:01:27.049547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.490 [2024-07-16 01:01:27.049566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.490 [2024-07-16 01:01:27.066058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.490 [2024-07-16 01:01:27.066100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.490 [2024-07-16 01:01:27.066117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.490 [2024-07-16 01:01:27.077025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.490 [2024-07-16 01:01:27.077053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.490 [2024-07-16 01:01:27.077085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.490 [2024-07-16 01:01:27.092441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.490 [2024-07-16 01:01:27.092475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.490 [2024-07-16 01:01:27.092493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.490 [2024-07-16 01:01:27.106818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.490 [2024-07-16 01:01:27.106852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.490 [2024-07-16 01:01:27.106871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.490 [2024-07-16 01:01:27.120435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.490 [2024-07-16 01:01:27.120470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.491 [2024-07-16 01:01:27.120489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.491 [2024-07-16 01:01:27.133346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.491 [2024-07-16 01:01:27.133381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.491 [2024-07-16 01:01:27.133399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.491 [2024-07-16 01:01:27.148046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.491 [2024-07-16 01:01:27.148078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.491 [2024-07-16 01:01:27.148095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.491 [2024-07-16 01:01:27.160902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.491 [2024-07-16 01:01:27.160947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.491 [2024-07-16 01:01:27.160964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.491 [2024-07-16 01:01:27.175259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.491 [2024-07-16 01:01:27.175292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.491 [2024-07-16 01:01:27.175310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.491 [2024-07-16 01:01:27.188727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.491 [2024-07-16 01:01:27.188761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.491 [2024-07-16 01:01:27.188779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.491 [2024-07-16 01:01:27.202845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.491 [2024-07-16 01:01:27.202886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:52.491 [2024-07-16 01:01:27.202908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.491 [2024-07-16 01:01:27.215252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.491 [2024-07-16 01:01:27.215285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.491 [2024-07-16 01:01:27.215310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.491 [2024-07-16 01:01:27.228413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.491 [2024-07-16 01:01:27.228450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.491 [2024-07-16 01:01:27.228468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.491 [2024-07-16 01:01:27.243890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.491 [2024-07-16 01:01:27.243936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.491 [2024-07-16 01:01:27.243951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.750 [2024-07-16 01:01:27.254237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.751 [2024-07-16 01:01:27.254265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.751 [2024-07-16 01:01:27.254280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.751 [2024-07-16 01:01:27.270090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.751 [2024-07-16 01:01:27.270118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.751 [2024-07-16 01:01:27.270133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.751 [2024-07-16 01:01:27.282742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.751 [2024-07-16 01:01:27.282776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.751 [2024-07-16 01:01:27.282794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.751 [2024-07-16 01:01:27.296080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.751 [2024-07-16 01:01:27.296111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 
lba:6850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.751 [2024-07-16 01:01:27.296127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.751 [2024-07-16 01:01:27.311936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.751 [2024-07-16 01:01:27.311967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.751 [2024-07-16 01:01:27.311984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.751 [2024-07-16 01:01:27.324684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.751 [2024-07-16 01:01:27.324719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.751 [2024-07-16 01:01:27.324738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.751 [2024-07-16 01:01:27.340008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.751 [2024-07-16 01:01:27.340042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.751 [2024-07-16 01:01:27.340059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.751 [2024-07-16 01:01:27.355563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.751 [2024-07-16 01:01:27.355597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.751 [2024-07-16 01:01:27.355615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.751 [2024-07-16 01:01:27.367483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.751 [2024-07-16 01:01:27.367516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.751 [2024-07-16 01:01:27.367535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.751 [2024-07-16 01:01:27.383505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.751 [2024-07-16 01:01:27.383539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.751 [2024-07-16 01:01:27.383558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.751 [2024-07-16 01:01:27.394838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.751 [2024-07-16 01:01:27.394872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.751 [2024-07-16 01:01:27.394901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.751 [2024-07-16 01:01:27.411055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.751 [2024-07-16 01:01:27.411083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.751 [2024-07-16 01:01:27.411116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.751 [2024-07-16 01:01:27.424510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.751 [2024-07-16 01:01:27.424544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.751 [2024-07-16 01:01:27.424562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.751 [2024-07-16 01:01:27.436986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.751 [2024-07-16 01:01:27.437014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.751 [2024-07-16 01:01:27.437030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.751 [2024-07-16 01:01:27.451763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.751 [2024-07-16 01:01:27.451796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.751 [2024-07-16 01:01:27.451820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.751 [2024-07-16 01:01:27.465197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.751 [2024-07-16 01:01:27.465231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.751 [2024-07-16 01:01:27.465250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.751 [2024-07-16 01:01:27.479302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.751 [2024-07-16 01:01:27.479336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.751 [2024-07-16 01:01:27.479354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.751 [2024-07-16 01:01:27.492441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 
00:24:52.751 [2024-07-16 01:01:27.492475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.751 [2024-07-16 01:01:27.492493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.751 [2024-07-16 01:01:27.505621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:52.751 [2024-07-16 01:01:27.505655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.751 [2024-07-16 01:01:27.505673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.010 [2024-07-16 01:01:27.519422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.010 [2024-07-16 01:01:27.519456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.010 [2024-07-16 01:01:27.519474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.010 [2024-07-16 01:01:27.532847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.010 [2024-07-16 01:01:27.532889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.010 [2024-07-16 01:01:27.532911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.010 [2024-07-16 01:01:27.546765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.010 [2024-07-16 01:01:27.546798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.010 [2024-07-16 01:01:27.546817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.010 [2024-07-16 01:01:27.560145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.010 [2024-07-16 01:01:27.560174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.010 [2024-07-16 01:01:27.560205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.010 [2024-07-16 01:01:27.573997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.010 [2024-07-16 01:01:27.574033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.010 [2024-07-16 01:01:27.574050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.010 [2024-07-16 01:01:27.587963] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.010 [2024-07-16 01:01:27.587991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.010 [2024-07-16 01:01:27.588007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.010 [2024-07-16 01:01:27.599713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.010 [2024-07-16 01:01:27.599745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.010 [2024-07-16 01:01:27.599764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.010 [2024-07-16 01:01:27.614093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.010 [2024-07-16 01:01:27.614124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.010 [2024-07-16 01:01:27.614141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.010 [2024-07-16 01:01:27.627944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.010 [2024-07-16 01:01:27.627974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.010 [2024-07-16 01:01:27.627991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.010 [2024-07-16 01:01:27.639980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.010 [2024-07-16 01:01:27.640008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.010 [2024-07-16 01:01:27.640023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.010 [2024-07-16 01:01:27.654587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.010 [2024-07-16 01:01:27.654623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.010 [2024-07-16 01:01:27.654642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.010 [2024-07-16 01:01:27.668360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.010 [2024-07-16 01:01:27.668395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.010 [2024-07-16 01:01:27.668414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:24:53.010 [2024-07-16 01:01:27.682703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.010 [2024-07-16 01:01:27.682738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.010 [2024-07-16 01:01:27.682756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.010 [2024-07-16 01:01:27.694220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.010 [2024-07-16 01:01:27.694254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.010 [2024-07-16 01:01:27.694273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.010 [2024-07-16 01:01:27.709707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.010 [2024-07-16 01:01:27.709741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.010 [2024-07-16 01:01:27.709759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.010 [2024-07-16 01:01:27.724870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.010 [2024-07-16 01:01:27.724928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.010 [2024-07-16 01:01:27.724945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.010 [2024-07-16 01:01:27.737395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.010 [2024-07-16 01:01:27.737431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.010 [2024-07-16 01:01:27.737450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.010 [2024-07-16 01:01:27.751749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.010 [2024-07-16 01:01:27.751783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.010 [2024-07-16 01:01:27.751802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.010 [2024-07-16 01:01:27.766167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.010 [2024-07-16 01:01:27.766198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.011 [2024-07-16 01:01:27.766232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.270 [2024-07-16 01:01:27.778459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.270 [2024-07-16 01:01:27.778494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.270 [2024-07-16 01:01:27.778512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.270 [2024-07-16 01:01:27.793051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.270 [2024-07-16 01:01:27.793099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.270 [2024-07-16 01:01:27.793114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.270 [2024-07-16 01:01:27.806972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.270 [2024-07-16 01:01:27.807008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.270 [2024-07-16 01:01:27.807030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.270 [2024-07-16 01:01:27.818736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.270 [2024-07-16 01:01:27.818769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.270 [2024-07-16 01:01:27.818788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.270 [2024-07-16 01:01:27.833856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.270 [2024-07-16 01:01:27.833898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.270 [2024-07-16 01:01:27.833932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.270 [2024-07-16 01:01:27.846893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.270 [2024-07-16 01:01:27.846939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.270 [2024-07-16 01:01:27.846957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.270 [2024-07-16 01:01:27.860798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.270 [2024-07-16 01:01:27.860832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.270 [2024-07-16 01:01:27.860850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.270 [2024-07-16 01:01:27.873271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.270 [2024-07-16 01:01:27.873305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.270 [2024-07-16 01:01:27.873323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.270 [2024-07-16 01:01:27.887354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.270 [2024-07-16 01:01:27.887388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.270 [2024-07-16 01:01:27.887406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.270 [2024-07-16 01:01:27.902261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.270 [2024-07-16 01:01:27.902295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.270 [2024-07-16 01:01:27.902313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.270 [2024-07-16 01:01:27.915195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.270 [2024-07-16 01:01:27.915230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.270 [2024-07-16 01:01:27.915249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.270 [2024-07-16 01:01:27.929125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.270 [2024-07-16 01:01:27.929160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.270 [2024-07-16 01:01:27.929178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.271 [2024-07-16 01:01:27.941316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.271 [2024-07-16 01:01:27.941349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.271 [2024-07-16 01:01:27.941367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.271 [2024-07-16 01:01:27.957065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.271 [2024-07-16 01:01:27.957096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:53.271 [2024-07-16 01:01:27.957112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.271 [2024-07-16 01:01:27.969235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.271 [2024-07-16 01:01:27.969270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.271 [2024-07-16 01:01:27.969288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.271 [2024-07-16 01:01:27.983562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.271 [2024-07-16 01:01:27.983595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.271 [2024-07-16 01:01:27.983614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.271 [2024-07-16 01:01:27.996407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.271 [2024-07-16 01:01:27.996440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.271 [2024-07-16 01:01:27.996459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.271 [2024-07-16 01:01:28.010809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.271 [2024-07-16 01:01:28.010843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.271 [2024-07-16 01:01:28.010861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.271 [2024-07-16 01:01:28.023814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.271 [2024-07-16 01:01:28.023848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.271 [2024-07-16 01:01:28.023866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.038445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.532 [2024-07-16 01:01:28.038480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.038504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.050400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.532 [2024-07-16 01:01:28.050434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 
lba:8905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.050452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.066261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.532 [2024-07-16 01:01:28.066295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.066313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.082178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.532 [2024-07-16 01:01:28.082227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.082246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.094007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.532 [2024-07-16 01:01:28.094035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.094051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.109015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.532 [2024-07-16 01:01:28.109047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.109064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.120774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.532 [2024-07-16 01:01:28.120805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.120822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.133907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.532 [2024-07-16 01:01:28.133938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.133955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.145287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.532 [2024-07-16 01:01:28.145318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.145335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.158859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.532 [2024-07-16 01:01:28.158911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.158930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.171422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.532 [2024-07-16 01:01:28.171453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.171470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.184855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.532 [2024-07-16 01:01:28.184895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.184914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.197000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.532 [2024-07-16 01:01:28.197029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.197059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.209666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.532 [2024-07-16 01:01:28.209696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.209713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.222628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.532 [2024-07-16 01:01:28.222659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.222676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.234983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 
00:24:53.532 [2024-07-16 01:01:28.235014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.235030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.247839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.532 [2024-07-16 01:01:28.247870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.247896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.259288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.532 [2024-07-16 01:01:28.259318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.259335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.272207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.532 [2024-07-16 01:01:28.272238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.272254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.532 [2024-07-16 01:01:28.285774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.532 [2024-07-16 01:01:28.285804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.532 [2024-07-16 01:01:28.285821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.791 [2024-07-16 01:01:28.296764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.791 [2024-07-16 01:01:28.296793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.791 [2024-07-16 01:01:28.296824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.791 [2024-07-16 01:01:28.309770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.791 [2024-07-16 01:01:28.309801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.791 [2024-07-16 01:01:28.309818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.791 [2024-07-16 01:01:28.323533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.791 [2024-07-16 01:01:28.323565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.791 [2024-07-16 01:01:28.323581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.791 [2024-07-16 01:01:28.336986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aff00) 00:24:53.791 [2024-07-16 01:01:28.337016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.791 [2024-07-16 01:01:28.337032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.791 00:24:53.791 Latency(us) 00:24:53.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.791 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:53.791 nvme0n1 : 2.01 18687.43 73.00 0.00 0.00 6841.53 3228.25 18252.99 00:24:53.791 =================================================================================================================== 00:24:53.791 Total : 18687.43 73.00 0.00 0.00 6841.53 3228.25 18252.99 00:24:53.791 0 00:24:53.791 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:53.791 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:53.791 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:53.791 | .driver_specific 00:24:53.791 | .nvme_error 00:24:53.791 | .status_code 00:24:53.791 | .command_transient_transport_error' 00:24:53.791 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:54.050 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 )) 00:24:54.050 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2741878 00:24:54.050 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2741878 ']' 00:24:54.050 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2741878 00:24:54.050 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:54.050 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:54.050 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2741878 00:24:54.050 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:54.050 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:54.050 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2741878' 00:24:54.050 killing process with pid 2741878 00:24:54.050 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2741878 00:24:54.050 Received shutdown signal, test time was about 2.000000 seconds 00:24:54.050 
00:24:54.050 Latency(us) 00:24:54.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.050 =================================================================================================================== 00:24:54.050 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:54.050 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2741878 00:24:54.308 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:54.308 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:54.308 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:54.308 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:54.308 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:54.308 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2742388 00:24:54.308 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:54.308 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2742388 /var/tmp/bperf.sock 00:24:54.308 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2742388 ']' 00:24:54.308 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:54.308 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:54.308 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:54.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:54.308 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:54.308 01:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:54.308 [2024-07-16 01:01:28.949600] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:24:54.308 [2024-07-16 01:01:28.949693] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2742388 ] 00:24:54.308 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:54.308 Zero copy mechanism will not be used. 
00:24:54.308 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.308 [2024-07-16 01:01:29.009254] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.565 [2024-07-16 01:01:29.122314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.565 01:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:54.565 01:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:54.565 01:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:54.565 01:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:54.823 01:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:54.823 01:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.823 01:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:54.823 01:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.823 01:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:54.823 01:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:55.397 nvme0n1 00:24:55.397 01:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:55.397 01:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.397 01:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:55.397 01:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.397 01:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:55.397 01:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:55.397 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:55.397 Zero copy mechanism will not be used. 00:24:55.397 Running I/O for 2 seconds... 
00:24:55.397 [2024-07-16 01:01:30.122485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.397 [2024-07-16 01:01:30.122561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.397 [2024-07-16 01:01:30.122583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.397 [2024-07-16 01:01:30.134559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.398 [2024-07-16 01:01:30.134591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.398 [2024-07-16 01:01:30.134623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.398 [2024-07-16 01:01:30.146345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.398 [2024-07-16 01:01:30.146375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.398 [2024-07-16 01:01:30.146408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.662 [2024-07-16 01:01:30.158089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.662 [2024-07-16 01:01:30.158130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.662 [2024-07-16 01:01:30.158148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.662 [2024-07-16 01:01:30.169792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.662 [2024-07-16 01:01:30.169822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.662 [2024-07-16 01:01:30.169853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.662 [2024-07-16 01:01:30.181586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.662 [2024-07-16 01:01:30.181616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.662 [2024-07-16 01:01:30.181632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.662 [2024-07-16 01:01:30.193349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.662 [2024-07-16 01:01:30.193379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.662 [2024-07-16 01:01:30.193410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.662 [2024-07-16 01:01:30.205114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.662 [2024-07-16 01:01:30.205158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.662 [2024-07-16 01:01:30.205175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.662 [2024-07-16 01:01:30.216947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.662 [2024-07-16 01:01:30.216977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.662 [2024-07-16 01:01:30.216994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.663 [2024-07-16 01:01:30.228695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.663 [2024-07-16 01:01:30.228725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-07-16 01:01:30.228743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.663 [2024-07-16 01:01:30.240529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.663 [2024-07-16 01:01:30.240559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-07-16 01:01:30.240576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.663 [2024-07-16 01:01:30.252251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.663 [2024-07-16 01:01:30.252280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-07-16 01:01:30.252312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.663 [2024-07-16 01:01:30.263992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.663 [2024-07-16 01:01:30.264020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-07-16 01:01:30.264037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.663 [2024-07-16 01:01:30.275756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.663 [2024-07-16 01:01:30.275800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-07-16 01:01:30.275816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.663 [2024-07-16 01:01:30.287636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.663 [2024-07-16 01:01:30.287664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-07-16 01:01:30.287695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.663 [2024-07-16 01:01:30.299419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.663 [2024-07-16 01:01:30.299462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-07-16 01:01:30.299479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.663 [2024-07-16 01:01:30.311631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.663 [2024-07-16 01:01:30.311659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-07-16 01:01:30.311691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.663 [2024-07-16 01:01:30.323345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.663 [2024-07-16 01:01:30.323374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-07-16 01:01:30.323406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.663 [2024-07-16 01:01:30.335064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.663 [2024-07-16 01:01:30.335094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-07-16 01:01:30.335110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.663 [2024-07-16 01:01:30.346762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.663 [2024-07-16 01:01:30.346792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-07-16 01:01:30.346807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.663 [2024-07-16 01:01:30.358551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.663 [2024-07-16 01:01:30.358595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:55.663 [2024-07-16 01:01:30.358620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.663 [2024-07-16 01:01:30.370703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.663 [2024-07-16 01:01:30.370745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-07-16 01:01:30.370762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.663 [2024-07-16 01:01:30.382425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.663 [2024-07-16 01:01:30.382468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-07-16 01:01:30.382484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.663 [2024-07-16 01:01:30.394206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.663 [2024-07-16 01:01:30.394236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-07-16 01:01:30.394269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.663 [2024-07-16 01:01:30.405961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.663 [2024-07-16 01:01:30.405991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-07-16 01:01:30.406008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.663 [2024-07-16 01:01:30.417612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.663 [2024-07-16 01:01:30.417643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-07-16 01:01:30.417661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.429478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.429509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.429526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.441485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.441515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.441549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.453187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.453217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.453234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.465002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.465054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.465072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.476768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.476797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.476828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.488692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.488736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.488752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.500700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.500743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.500760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.512518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.512547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.512563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.524448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.524491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.524508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.536485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.536514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.536530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.548257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.548300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.548316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.560130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.560160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.560176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.572004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.572048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.572064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.583871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.583906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.583938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.595722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.595750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.595767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.607790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 
00:24:55.932 [2024-07-16 01:01:30.607833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.607849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.619745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.619774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.619807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.631703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.631745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.631762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.643803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.643844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.643861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.655692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.655738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.655755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.667652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.667688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.667720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.932 [2024-07-16 01:01:30.679820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:55.932 [2024-07-16 01:01:30.679851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.932 [2024-07-16 01:01:30.679867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.202 [2024-07-16 01:01:30.691743] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.202 [2024-07-16 01:01:30.691773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.202 [2024-07-16 01:01:30.691790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.202 [2024-07-16 01:01:30.703681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.202 [2024-07-16 01:01:30.703710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.202 [2024-07-16 01:01:30.703741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.202 [2024-07-16 01:01:30.715707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.202 [2024-07-16 01:01:30.715750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.202 [2024-07-16 01:01:30.715766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.202 [2024-07-16 01:01:30.727612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.202 [2024-07-16 01:01:30.727641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.202 [2024-07-16 01:01:30.727673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.202 [2024-07-16 01:01:30.739405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.202 [2024-07-16 01:01:30.739433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.202 [2024-07-16 01:01:30.739465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.202 [2024-07-16 01:01:30.751478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.202 [2024-07-16 01:01:30.751520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.202 [2024-07-16 01:01:30.751537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.202 [2024-07-16 01:01:30.763325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.202 [2024-07-16 01:01:30.763354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.202 [2024-07-16 01:01:30.763370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:24:56.202 [2024-07-16 01:01:30.775171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.202 [2024-07-16 01:01:30.775218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.202 [2024-07-16 01:01:30.775235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.202 [2024-07-16 01:01:30.787134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.202 [2024-07-16 01:01:30.787164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.202 [2024-07-16 01:01:30.787182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.202 [2024-07-16 01:01:30.798931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.202 [2024-07-16 01:01:30.798962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.202 [2024-07-16 01:01:30.798979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.202 [2024-07-16 01:01:30.810885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.202 [2024-07-16 01:01:30.810914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.202 [2024-07-16 01:01:30.810931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.202 [2024-07-16 01:01:30.822763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.202 [2024-07-16 01:01:30.822807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.202 [2024-07-16 01:01:30.822823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.202 [2024-07-16 01:01:30.834682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.202 [2024-07-16 01:01:30.834711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.202 [2024-07-16 01:01:30.834743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.202 [2024-07-16 01:01:30.846449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.202 [2024-07-16 01:01:30.846478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.202 [2024-07-16 01:01:30.846510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.202 [2024-07-16 01:01:30.858164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.202 [2024-07-16 01:01:30.858209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.202 [2024-07-16 01:01:30.858226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.202 [2024-07-16 01:01:30.869990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.202 [2024-07-16 01:01:30.870019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.202 [2024-07-16 01:01:30.870044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.202 [2024-07-16 01:01:30.881798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.203 [2024-07-16 01:01:30.881827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.203 [2024-07-16 01:01:30.881844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.203 [2024-07-16 01:01:30.893633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.203 [2024-07-16 01:01:30.893661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.203 [2024-07-16 01:01:30.893693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.203 [2024-07-16 01:01:30.905404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.203 [2024-07-16 01:01:30.905446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.203 [2024-07-16 01:01:30.905463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.203 [2024-07-16 01:01:30.917180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.203 [2024-07-16 01:01:30.917211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.203 [2024-07-16 01:01:30.917228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.203 [2024-07-16 01:01:30.929058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.203 [2024-07-16 01:01:30.929089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.203 [2024-07-16 01:01:30.929106] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.203 [2024-07-16 01:01:30.940764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.203 [2024-07-16 01:01:30.940792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.203 [2024-07-16 01:01:30.940809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.203 [2024-07-16 01:01:30.952755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.203 [2024-07-16 01:01:30.952798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.203 [2024-07-16 01:01:30.952815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.460 [2024-07-16 01:01:30.964589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.460 [2024-07-16 01:01:30.964632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.460 [2024-07-16 01:01:30.964649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.460 [2024-07-16 01:01:30.976593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.460 [2024-07-16 01:01:30.976646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.460 [2024-07-16 01:01:30.976678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.460 [2024-07-16 01:01:30.988395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.460 [2024-07-16 01:01:30.988423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.460 [2024-07-16 01:01:30.988439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.460 [2024-07-16 01:01:31.000231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.460 [2024-07-16 01:01:31.000259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.460 [2024-07-16 01:01:31.000291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.461 [2024-07-16 01:01:31.011976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.461 [2024-07-16 01:01:31.012005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:56.461 [2024-07-16 01:01:31.012022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.461 [2024-07-16 01:01:31.023645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.461 [2024-07-16 01:01:31.023673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.461 [2024-07-16 01:01:31.023689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.461 [2024-07-16 01:01:31.035598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.461 [2024-07-16 01:01:31.035641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.461 [2024-07-16 01:01:31.035659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.461 [2024-07-16 01:01:31.047713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.461 [2024-07-16 01:01:31.047755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.461 [2024-07-16 01:01:31.047771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.461 [2024-07-16 01:01:31.059696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.461 [2024-07-16 01:01:31.059724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.461 [2024-07-16 01:01:31.059740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.461 [2024-07-16 01:01:31.071652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.461 [2024-07-16 01:01:31.071682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.461 [2024-07-16 01:01:31.071723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.461 [2024-07-16 01:01:31.084523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.461 [2024-07-16 01:01:31.084556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.461 [2024-07-16 01:01:31.084581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.461 [2024-07-16 01:01:31.097458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.461 [2024-07-16 01:01:31.097492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.461 [2024-07-16 01:01:31.097511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.461 [2024-07-16 01:01:31.110358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.461 [2024-07-16 01:01:31.110391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.461 [2024-07-16 01:01:31.110410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.461 [2024-07-16 01:01:31.123336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.461 [2024-07-16 01:01:31.123369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.461 [2024-07-16 01:01:31.123388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.461 [2024-07-16 01:01:31.136423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.461 [2024-07-16 01:01:31.136455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.461 [2024-07-16 01:01:31.136474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.461 [2024-07-16 01:01:31.149592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.461 [2024-07-16 01:01:31.149625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.461 [2024-07-16 01:01:31.149644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.461 [2024-07-16 01:01:31.162764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.461 [2024-07-16 01:01:31.162797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.461 [2024-07-16 01:01:31.162815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.461 [2024-07-16 01:01:31.175816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.461 [2024-07-16 01:01:31.175852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.461 [2024-07-16 01:01:31.175871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.461 [2024-07-16 01:01:31.188936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.461 [2024-07-16 01:01:31.188971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.461 [2024-07-16 01:01:31.188989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.461 [2024-07-16 01:01:31.201922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.461 [2024-07-16 01:01:31.201951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.461 [2024-07-16 01:01:31.201967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.461 [2024-07-16 01:01:31.215044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.461 [2024-07-16 01:01:31.215074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.461 [2024-07-16 01:01:31.215090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.227966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.721 [2024-07-16 01:01:31.227995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.228012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.240776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.721 [2024-07-16 01:01:31.240808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.240826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.253694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.721 [2024-07-16 01:01:31.253726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.253744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.266710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.721 [2024-07-16 01:01:31.266743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.266761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.279622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 
00:24:56.721 [2024-07-16 01:01:31.279656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.279675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.292626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.721 [2024-07-16 01:01:31.292659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.292677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.305685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.721 [2024-07-16 01:01:31.305716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.305734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.318648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.721 [2024-07-16 01:01:31.318680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.318699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.331617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.721 [2024-07-16 01:01:31.331649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.331667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.344559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.721 [2024-07-16 01:01:31.344592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.344610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.357446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.721 [2024-07-16 01:01:31.357479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.357498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.370490] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.721 [2024-07-16 01:01:31.370521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.370539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.383606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.721 [2024-07-16 01:01:31.383639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.383657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.396519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.721 [2024-07-16 01:01:31.396550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.396569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.409526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.721 [2024-07-16 01:01:31.409558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.409584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.422496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.721 [2024-07-16 01:01:31.422528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.422547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.435602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.721 [2024-07-16 01:01:31.435635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.435654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.448546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.721 [2024-07-16 01:01:31.448578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.448597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.461477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.721 [2024-07-16 01:01:31.461510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.461530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.721 [2024-07-16 01:01:31.474311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.721 [2024-07-16 01:01:31.474343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.721 [2024-07-16 01:01:31.474361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.982 [2024-07-16 01:01:31.487248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.982 [2024-07-16 01:01:31.487281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-16 01:01:31.487301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.982 [2024-07-16 01:01:31.500064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.982 [2024-07-16 01:01:31.500092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-16 01:01:31.500124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.982 [2024-07-16 01:01:31.513117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.982 [2024-07-16 01:01:31.513146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-16 01:01:31.513163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.982 [2024-07-16 01:01:31.526055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.982 [2024-07-16 01:01:31.526084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-16 01:01:31.526115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.982 [2024-07-16 01:01:31.538817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.982 [2024-07-16 01:01:31.538849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-16 01:01:31.538868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.982 [2024-07-16 01:01:31.551779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.982 [2024-07-16 01:01:31.551810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-16 01:01:31.551829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.982 [2024-07-16 01:01:31.564656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.982 [2024-07-16 01:01:31.564688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-16 01:01:31.564706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.982 [2024-07-16 01:01:31.577658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.982 [2024-07-16 01:01:31.577690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-16 01:01:31.577708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.982 [2024-07-16 01:01:31.590721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.982 [2024-07-16 01:01:31.590754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-16 01:01:31.590772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.982 [2024-07-16 01:01:31.603715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.982 [2024-07-16 01:01:31.603747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-16 01:01:31.603765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.982 [2024-07-16 01:01:31.616705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.982 [2024-07-16 01:01:31.616738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-16 01:01:31.616756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.982 [2024-07-16 01:01:31.629609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.982 [2024-07-16 01:01:31.629641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-16 01:01:31.629666] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.982 [2024-07-16 01:01:31.642613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.982 [2024-07-16 01:01:31.642646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-16 01:01:31.642664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.982 [2024-07-16 01:01:31.655538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.982 [2024-07-16 01:01:31.655571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-16 01:01:31.655590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.982 [2024-07-16 01:01:31.668540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.982 [2024-07-16 01:01:31.668572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-16 01:01:31.668590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.983 [2024-07-16 01:01:31.681471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.983 [2024-07-16 01:01:31.681502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-16 01:01:31.681521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.983 [2024-07-16 01:01:31.694459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.983 [2024-07-16 01:01:31.694494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-16 01:01:31.694514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.983 [2024-07-16 01:01:31.707430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.983 [2024-07-16 01:01:31.707462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-16 01:01:31.707481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.983 [2024-07-16 01:01:31.720327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.983 [2024-07-16 01:01:31.720359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:56.983 [2024-07-16 01:01:31.720378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.983 [2024-07-16 01:01:31.733235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:56.983 [2024-07-16 01:01:31.733281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-16 01:01:31.733299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.746118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.243 [2024-07-16 01:01:31.746168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.243 [2024-07-16 01:01:31.746185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.758952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.243 [2024-07-16 01:01:31.758996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.243 [2024-07-16 01:01:31.759012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.771758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.243 [2024-07-16 01:01:31.771790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.243 [2024-07-16 01:01:31.771808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.784774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.243 [2024-07-16 01:01:31.784806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.243 [2024-07-16 01:01:31.784825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.797730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.243 [2024-07-16 01:01:31.797760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.243 [2024-07-16 01:01:31.797779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.810600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.243 [2024-07-16 01:01:31.810632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.243 [2024-07-16 01:01:31.810650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.823439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.243 [2024-07-16 01:01:31.823470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.243 [2024-07-16 01:01:31.823488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.836264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.243 [2024-07-16 01:01:31.836296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.243 [2024-07-16 01:01:31.836315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.849092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.243 [2024-07-16 01:01:31.849120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.243 [2024-07-16 01:01:31.849151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.862176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.243 [2024-07-16 01:01:31.862222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.243 [2024-07-16 01:01:31.862240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.875021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.243 [2024-07-16 01:01:31.875049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.243 [2024-07-16 01:01:31.875082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.887932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.243 [2024-07-16 01:01:31.887960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.243 [2024-07-16 01:01:31.887977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.900748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.243 [2024-07-16 01:01:31.900781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.243 [2024-07-16 01:01:31.900801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.913647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.243 [2024-07-16 01:01:31.913680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.243 [2024-07-16 01:01:31.913698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.926624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.243 [2024-07-16 01:01:31.926655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.243 [2024-07-16 01:01:31.926673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.939672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.243 [2024-07-16 01:01:31.939705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.243 [2024-07-16 01:01:31.939724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.952648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.243 [2024-07-16 01:01:31.952682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.243 [2024-07-16 01:01:31.952701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.965871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.243 [2024-07-16 01:01:31.965909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.243 [2024-07-16 01:01:31.965934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.979004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.243 [2024-07-16 01:01:31.979047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.243 [2024-07-16 01:01:31.979065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.243 [2024-07-16 01:01:31.992005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 
00:24:57.244 [2024-07-16 01:01:31.992035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.244 [2024-07-16 01:01:31.992051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.503 [2024-07-16 01:01:32.004866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.503 [2024-07-16 01:01:32.004927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.503 [2024-07-16 01:01:32.004945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.503 [2024-07-16 01:01:32.017962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.503 [2024-07-16 01:01:32.018005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.503 [2024-07-16 01:01:32.018022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.503 [2024-07-16 01:01:32.031156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.503 [2024-07-16 01:01:32.031203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.503 [2024-07-16 01:01:32.031221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.503 [2024-07-16 01:01:32.044016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.503 [2024-07-16 01:01:32.044058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.503 [2024-07-16 01:01:32.044074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.503 [2024-07-16 01:01:32.057366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.503 [2024-07-16 01:01:32.057399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.503 [2024-07-16 01:01:32.057418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.503 [2024-07-16 01:01:32.070227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.503 [2024-07-16 01:01:32.070272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.503 [2024-07-16 01:01:32.070291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.503 [2024-07-16 01:01:32.082714] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.503 [2024-07-16 01:01:32.082748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.503 [2024-07-16 01:01:32.082767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.503 [2024-07-16 01:01:32.095727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.503 [2024-07-16 01:01:32.095760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.503 [2024-07-16 01:01:32.095779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.503 [2024-07-16 01:01:32.108490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19b06b0) 00:24:57.503 [2024-07-16 01:01:32.108524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.503 [2024-07-16 01:01:32.108542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.503 00:24:57.503 Latency(us) 00:24:57.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.503 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:57.503 nvme0n1 : 2.00 2492.87 311.61 0.00 0.00 6410.93 5728.33 14272.28 00:24:57.503 =================================================================================================================== 00:24:57.503 Total : 2492.87 311.61 0.00 0.00 6410.93 5728.33 14272.28 00:24:57.503 0 00:24:57.503 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:57.503 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:57.503 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:57.503 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:57.503 | .driver_specific 00:24:57.503 | .nvme_error 00:24:57.503 | .status_code 00:24:57.503 | .command_transient_transport_error' 00:24:57.761 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 161 > 0 )) 00:24:57.761 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2742388 00:24:57.761 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2742388 ']' 00:24:57.761 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2742388 00:24:57.761 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:57.761 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:57.761 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2742388 00:24:57.761 01:01:32 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:57.761 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:57.761 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2742388' 00:24:57.761 killing process with pid 2742388 00:24:57.761 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2742388 00:24:57.761 Received shutdown signal, test time was about 2.000000 seconds 00:24:57.761 00:24:57.761 Latency(us) 00:24:57.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.761 =================================================================================================================== 00:24:57.761 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:57.761 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2742388 00:24:58.019 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:24:58.019 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:58.019 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:58.019 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:58.019 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:58.019 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2742823 00:24:58.019 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:24:58.019 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2742823 /var/tmp/bperf.sock 00:24:58.019 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2742823 ']' 00:24:58.019 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:58.019 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:58.019 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:58.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:58.020 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:58.020 01:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:58.020 [2024-07-16 01:01:32.702604] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
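For reference, the get_transient_errcount check traced above reduces to a single RPC query piped through jq; a minimal sketch of running it by hand, with the repository-relative script path shortened, and assuming (as in this run) that bdevperf still serves RPCs on /var/tmp/bperf.sock and the attached bdev is named nvme0n1:

    # Read bdevperf's per-bdev NVMe error counters and extract the
    # COMMAND TRANSIENT TRANSPORT ERROR count that digest.sh compares against 0.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

In this pass the counter came back as 161, satisfying the (( 161 > 0 )) assertion seen above before the bdevperf process is killed.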
00:24:58.020 [2024-07-16 01:01:32.702682] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2742823 ] 00:24:58.020 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.020 [2024-07-16 01:01:32.769277] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.278 [2024-07-16 01:01:32.890423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.278 01:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:58.278 01:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:58.278 01:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:58.278 01:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:58.558 01:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:58.558 01:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.558 01:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:58.558 01:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.558 01:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:58.558 01:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:59.126 nvme0n1 00:24:59.126 01:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:59.126 01:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.126 01:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:59.126 01:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.126 01:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:59.126 01:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:59.126 Running I/O for 2 seconds... 
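The randwrite error pass that starts here ("Running I/O for 2 seconds...") is driven entirely over RPC; a hedged recap of the setup just traced, with the repository-relative script paths shortened and the target-side calls assumed to go to the suite's default RPC socket (both assumptions on my part), while the address, port, NQN, and flags are exactly as they appear in the trace:

    # bdevperf side: count NVMe completions per status code and retry failed I/O indefinitely
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side: make sure no stale crc32c error injection is still active
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    # bdevperf side: attach the subsystem with data digest (--ddgst) so payloads are CRC32C-checked
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side: corrupt the next 256 crc32c operations so data digest verification fails
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    # kick off the timed workload configured on the bdevperf command line (-w randwrite -o 4096 -q 128 -t 2)
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each injected data digest mismatch then surfaces as one of the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions logged below.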
00:24:59.126 [2024-07-16 01:01:33.748632] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.126 [2024-07-16 01:01:33.748978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.126 [2024-07-16 01:01:33.749017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.126 [2024-07-16 01:01:33.763116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.126 [2024-07-16 01:01:33.763453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.126 [2024-07-16 01:01:33.763487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.126 [2024-07-16 01:01:33.777581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.126 [2024-07-16 01:01:33.777865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.126 [2024-07-16 01:01:33.777906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.126 [2024-07-16 01:01:33.791973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.126 [2024-07-16 01:01:33.792318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.126 [2024-07-16 01:01:33.792351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.126 [2024-07-16 01:01:33.806458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.126 [2024-07-16 01:01:33.806776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.126 [2024-07-16 01:01:33.806807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.126 [2024-07-16 01:01:33.820798] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.126 [2024-07-16 01:01:33.821122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.126 [2024-07-16 01:01:33.821150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.126 [2024-07-16 01:01:33.835060] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.126 [2024-07-16 01:01:33.835387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.126 [2024-07-16 01:01:33.835417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:24:59.126 [2024-07-16 01:01:33.849371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.126 [2024-07-16 01:01:33.849680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.126 [2024-07-16 01:01:33.849710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.126 [2024-07-16 01:01:33.863475] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.126 [2024-07-16 01:01:33.863796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.126 [2024-07-16 01:01:33.863826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.126 [2024-07-16 01:01:33.877428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.126 [2024-07-16 01:01:33.877709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.126 [2024-07-16 01:01:33.877740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.384 [2024-07-16 01:01:33.891273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.384 [2024-07-16 01:01:33.891589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.384 [2024-07-16 01:01:33.891619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.384 [2024-07-16 01:01:33.905337] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.384 [2024-07-16 01:01:33.905645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.384 [2024-07-16 01:01:33.905676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.384 [2024-07-16 01:01:33.919616] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.384 [2024-07-16 01:01:33.919902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.384 [2024-07-16 01:01:33.919954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.384 [2024-07-16 01:01:33.933794] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.384 [2024-07-16 01:01:33.934153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.384 [2024-07-16 01:01:33.934201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e 
p:0 m:0 dnr:0 00:24:59.384 [2024-07-16 01:01:33.947832] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.384 [2024-07-16 01:01:33.948145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.384 [2024-07-16 01:01:33.948199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.384 [2024-07-16 01:01:33.961802] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.384 [2024-07-16 01:01:33.962129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.384 [2024-07-16 01:01:33.962165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.384 [2024-07-16 01:01:33.975758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.384 [2024-07-16 01:01:33.976065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.384 [2024-07-16 01:01:33.976093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.384 [2024-07-16 01:01:33.989817] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.384 [2024-07-16 01:01:33.990120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.384 [2024-07-16 01:01:33.990148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.384 [2024-07-16 01:01:34.003999] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.385 [2024-07-16 01:01:34.004296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.385 [2024-07-16 01:01:34.004341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.385 [2024-07-16 01:01:34.018070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.385 [2024-07-16 01:01:34.018367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.385 [2024-07-16 01:01:34.018400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.385 [2024-07-16 01:01:34.032206] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.385 [2024-07-16 01:01:34.032512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.385 [2024-07-16 01:01:34.032542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 00:24:59.385 [2024-07-16 01:01:34.046358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.385 [2024-07-16 01:01:34.046637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.385 [2024-07-16 01:01:34.046667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.385 [2024-07-16 01:01:34.060500] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.385 [2024-07-16 01:01:34.060779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.385 [2024-07-16 01:01:34.060810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.385 [2024-07-16 01:01:34.074619] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.385 [2024-07-16 01:01:34.074942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.385 [2024-07-16 01:01:34.074969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.385 [2024-07-16 01:01:34.088764] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.385 [2024-07-16 01:01:34.089095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.385 [2024-07-16 01:01:34.089127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.385 [2024-07-16 01:01:34.102993] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.385 [2024-07-16 01:01:34.103345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.385 [2024-07-16 01:01:34.103375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.385 [2024-07-16 01:01:34.117187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.385 [2024-07-16 01:01:34.117498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.385 [2024-07-16 01:01:34.117529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.385 [2024-07-16 01:01:34.131247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.385 [2024-07-16 01:01:34.131526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.385 [2024-07-16 01:01:34.131557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 
cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.643 [2024-07-16 01:01:34.145369] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.643 [2024-07-16 01:01:34.145647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.643 [2024-07-16 01:01:34.145677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.643 [2024-07-16 01:01:34.159574] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.643 [2024-07-16 01:01:34.159850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.643 [2024-07-16 01:01:34.159887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.643 [2024-07-16 01:01:34.173731] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.643 [2024-07-16 01:01:34.174055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.643 [2024-07-16 01:01:34.174083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.643 [2024-07-16 01:01:34.187942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.643 [2024-07-16 01:01:34.188360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.643 [2024-07-16 01:01:34.188392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.643 [2024-07-16 01:01:34.202146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.643 [2024-07-16 01:01:34.202481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.643 [2024-07-16 01:01:34.202511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.643 [2024-07-16 01:01:34.216388] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.643 [2024-07-16 01:01:34.216676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.643 [2024-07-16 01:01:34.216707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.643 [2024-07-16 01:01:34.230574] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.643 [2024-07-16 01:01:34.230853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.643 [2024-07-16 01:01:34.230891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.643 [2024-07-16 01:01:34.244782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.643 [2024-07-16 01:01:34.245164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.643 [2024-07-16 01:01:34.245208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.643 [2024-07-16 01:01:34.258910] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.643 [2024-07-16 01:01:34.259227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.643 [2024-07-16 01:01:34.259258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.643 [2024-07-16 01:01:34.273281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.643 [2024-07-16 01:01:34.273598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.643 [2024-07-16 01:01:34.273630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.643 [2024-07-16 01:01:34.287480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.643 [2024-07-16 01:01:34.287759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.643 [2024-07-16 01:01:34.287789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.643 [2024-07-16 01:01:34.301640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.643 [2024-07-16 01:01:34.301920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.643 [2024-07-16 01:01:34.301950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.643 [2024-07-16 01:01:34.315674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.643 [2024-07-16 01:01:34.315964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.643 [2024-07-16 01:01:34.315991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.643 [2024-07-16 01:01:34.329670] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.643 [2024-07-16 01:01:34.329964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.643 [2024-07-16 01:01:34.329992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.643 [2024-07-16 01:01:34.343733] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.643 [2024-07-16 01:01:34.344056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.643 [2024-07-16 01:01:34.344083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.643 [2024-07-16 01:01:34.357824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.644 [2024-07-16 01:01:34.358140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.644 [2024-07-16 01:01:34.358168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.644 [2024-07-16 01:01:34.372011] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.644 [2024-07-16 01:01:34.372338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.644 [2024-07-16 01:01:34.372368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.644 [2024-07-16 01:01:34.385993] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.644 [2024-07-16 01:01:34.386253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.644 [2024-07-16 01:01:34.386281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.644 [2024-07-16 01:01:34.399821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.644 [2024-07-16 01:01:34.400131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.644 [2024-07-16 01:01:34.400160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.901 [2024-07-16 01:01:34.413671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.901 [2024-07-16 01:01:34.413973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.901 [2024-07-16 01:01:34.414000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.901 [2024-07-16 01:01:34.427636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.901 [2024-07-16 01:01:34.427920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.901 [2024-07-16 01:01:34.427948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.901 [2024-07-16 01:01:34.441631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.901 [2024-07-16 01:01:34.441971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.901 [2024-07-16 01:01:34.441998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.901 [2024-07-16 01:01:34.455766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.901 [2024-07-16 01:01:34.456089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.901 [2024-07-16 01:01:34.456121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.901 [2024-07-16 01:01:34.469871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.901 [2024-07-16 01:01:34.470216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.901 [2024-07-16 01:01:34.470259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.901 [2024-07-16 01:01:34.483968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.901 [2024-07-16 01:01:34.484290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.901 [2024-07-16 01:01:34.484320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.902 [2024-07-16 01:01:34.498059] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.902 [2024-07-16 01:01:34.498394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.902 [2024-07-16 01:01:34.498424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.902 [2024-07-16 01:01:34.512101] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.902 [2024-07-16 01:01:34.512410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.902 [2024-07-16 01:01:34.512441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.902 [2024-07-16 01:01:34.526024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0 00:24:59.902 [2024-07-16 01:01:34.526333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.902 [2024-07-16 01:01:34.526364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:24:59.902 [2024-07-16 01:01:34.540071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21356a0) with pdu=0x2000190fdeb0
00:24:59.902 [2024-07-16 01:01:34.540418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.902 [2024-07-16 01:01:34.540448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0
[... the same digest-error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplet on tqpair=(0x21356a0), pdu=0x2000190fdeb0 repeats roughly every 14 ms from 01:01:34.540 through 01:01:35.738, with cid cycling 0/1/3/4, lba varying, len:1, and sqhd:007e on every completion; build timestamps advance from 00:24:59.902 to 00:25:01.203 over the span ...]
00:25:01.203
00:25:01.203 Latency(us)
00:25:01.203 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:01.203 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:01.203     nvme0n1                 :       2.01   18065.82      70.57       0.00     0.00    7068.03    6456.51   14660.65
00:25:01.203 ===================================================================================================================
00:25:01.203 Total                       :              18065.82      70.57       0.00     0.00    7068.03    6456.51   14660.65
00:25:01.203 0
00:25:01.203 01:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:01.203 01:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:01.203 01:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:01.203 01:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:01.203 | .driver_specific
00:25:01.203 | .nvme_error
00:25:01.203 | .status_code
00:25:01.203 | .command_transient_transport_error'
00:25:01.462 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 ))
00:25:01.462 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2742823
00:25:01.462 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2742823 ']'
00:25:01.462 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2742823
00:25:01.462 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:25:01.462 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:01.462 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2742823
00:25:01.462 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:25:01.462 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
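The get_transient_errcount step traced above amounts to one rpc.py call piped through jq. A minimal standalone sketch of the same check, assuming the bdevperf RPC socket is still /var/tmp/bperf.sock and the controller was attached with --nvme-error-stat in effect:

# Illustrative sketch: read the transient-transport-error counter bdevperf keeps for nvme0n1.
errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# This leg of the test only passes if at least one transient transport error was counted
# (the trace above saw 142 of them).
(( errcount > 0 ))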
00:25:01.462 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2742823'
00:25:01.462 killing process with pid 2742823
00:25:01.462 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2742823
00:25:01.462 Received shutdown signal, test time was about 2.000000 seconds
00:25:01.462
00:25:01.462 Latency(us)
00:25:01.462 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:01.462 ===================================================================================================================
00:25:01.462 Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:25:01.462 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2742823
00:25:01.720 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2743234
01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2743234 /var/tmp/bperf.sock
01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2743234 ']'
01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:01.720 [2024-07-16 01:01:36.397384] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization...
00:25:01.720 [2024-07-16 01:01:36.397461] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2743234 ]
00:25:01.720 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:01.720 Zero copy mechanism will not be used.
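The waitforlisten helper itself lives in common/autotest_common.sh and its body is not shown in this trace; as a rough sketch only, under the assumption that it simply polls the new bdevperf process and its RPC socket, the wait amounts to something like:

# Rough sketch, not the real waitforlisten: poll until the bdevperf RPC socket answers,
# or until the process dies or max_retries attempts are used up.
bperfpid=2743234
rpc_addr=/var/tmp/bperf.sock
max_retries=100
for ((i = 0; i < max_retries; i++)); do
    kill -0 "$bperfpid" 2>/dev/null || break      # stop waiting if bdevperf exited
    if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s "$rpc_addr" \
            rpc_get_methods &>/dev/null; then
        break                                     # socket is up and answering RPCs
    fi
    sleep 0.5
done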
00:25:01.720 EAL: No free 2048 kB hugepages reported on node 1
00:25:01.720 [2024-07-16 01:01:36.457132] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:01.977 [2024-07-16 01:01:36.567110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:01.977 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:01.977 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:25:01.977 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:01.977 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:02.235 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:02.235 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.235 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:02.235 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.235 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:02.235 01:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:02.802 nvme0n1
00:25:02.803 01:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:25:02.803 01:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.803 01:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:02.803 01:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.803 01:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:02.803 01:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:02.803 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:02.803 Zero copy mechanism will not be used.
00:25:02.803 Running I/O for 2 seconds...
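Condensed into one place, the set-up traced above boils down to the RPC sequence sketched below; paths and arguments are taken from the trace, while the two accel_error_inject_error calls go through the suite's rpc_cmd helper, whose target socket is not visible in this excerpt:

# Sketch of the traced set-up for the randwrite / 128 KiB / qd=16 error-injection run.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# Keep per-status-code NVMe error counters and retry failed I/O indefinitely.
$rpc -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# accel_error_inject_error -o crc32c -t disable        (via rpc_cmd: injection off while connecting)
# Attach the TCP controller with data digest (--ddgst) enabled so payloads carry crc32c digests.
$rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# accel_error_inject_error -o crc32c -t corrupt -i 32  (via rpc_cmd: corrupt 32 crc32c operations)
# Run the timed workload; the data digest errors that follow in the log are the expected result.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests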
00:25:02.803 [2024-07-16 01:01:37.521407] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90
00:25:02.803 [2024-07-16 01:01:37.521832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:02.803 [2024-07-16 01:01:37.521871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same digest-error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplet on tqpair=(0x1f6ac90), pdu=0x2000190fef90 repeats roughly every 15-20 ms from 01:01:37.521 through 01:01:38.266, always on cid:15 with len:32, lba varying, and sqhd cycling 0001/0021/0041/0061; build timestamps advance from 00:25:02.803 to 00:25:03.601 over the span ...]
00:25:03.601 [2024-07-16 01:01:38.266195] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 [2024-07-16 01:01:38.266612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.601 [2024-07-16 01:01:38.266640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:03.601 [2024-07-16 01:01:38.282800] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:03.601 [2024-07-16 01:01:38.283264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.601 [2024-07-16 01:01:38.283308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:03.601 [2024-07-16 01:01:38.301901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:03.601 [2024-07-16 01:01:38.302320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.601 [2024-07-16 01:01:38.302351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.601 [2024-07-16 01:01:38.320599] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:03.601 [2024-07-16 01:01:38.321020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.601 [2024-07-16 01:01:38.321050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:03.601 [2024-07-16 01:01:38.338886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:03.601 [2024-07-16 01:01:38.339246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.601 [2024-07-16 01:01:38.339289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:03.601 [2024-07-16 01:01:38.356888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:03.601 [2024-07-16 01:01:38.357253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.601 [2024-07-16 01:01:38.357298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:03.860 [2024-07-16 01:01:38.373926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:03.860 [2024-07-16 01:01:38.374296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.860 [2024-07-16 01:01:38.374340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.860 [2024-07-16 01:01:38.391471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:03.860 [2024-07-16 01:01:38.391993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.860 [2024-07-16 01:01:38.392031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:03.860 [2024-07-16 01:01:38.409015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:03.860 [2024-07-16 01:01:38.409569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.860 [2024-07-16 01:01:38.409612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:03.860 [2024-07-16 01:01:38.428183] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:03.860 [2024-07-16 01:01:38.428547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.860 [2024-07-16 01:01:38.428576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:03.860 [2024-07-16 01:01:38.446081] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:03.860 [2024-07-16 01:01:38.446579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.860 [2024-07-16 01:01:38.446623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.860 [2024-07-16 01:01:38.464917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:03.860 [2024-07-16 01:01:38.465300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.860 [2024-07-16 01:01:38.465347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:03.860 [2024-07-16 01:01:38.481564] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:03.860 [2024-07-16 01:01:38.482018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.860 [2024-07-16 01:01:38.482061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:03.860 [2024-07-16 01:01:38.498360] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:03.860 [2024-07-16 01:01:38.498767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.860 [2024-07-16 01:01:38.498796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:03.860 [2024-07-16 01:01:38.515966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:03.860 
[2024-07-16 01:01:38.516334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.860 [2024-07-16 01:01:38.516378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.860 [2024-07-16 01:01:38.534603] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:03.860 [2024-07-16 01:01:38.534991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.860 [2024-07-16 01:01:38.535019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:03.860 [2024-07-16 01:01:38.552486] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:03.860 [2024-07-16 01:01:38.553018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.860 [2024-07-16 01:01:38.553050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:03.860 [2024-07-16 01:01:38.572357] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:03.860 [2024-07-16 01:01:38.572722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.860 [2024-07-16 01:01:38.572753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:03.860 [2024-07-16 01:01:38.590494] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:03.860 [2024-07-16 01:01:38.590947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.860 [2024-07-16 01:01:38.590991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.860 [2024-07-16 01:01:38.608713] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:03.860 [2024-07-16 01:01:38.609149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.861 [2024-07-16 01:01:38.609194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.119 [2024-07-16 01:01:38.628156] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.119 [2024-07-16 01:01:38.628722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.119 [2024-07-16 01:01:38.628751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.119 [2024-07-16 01:01:38.645501] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.119 [2024-07-16 01:01:38.645866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.119 [2024-07-16 01:01:38.645902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.119 [2024-07-16 01:01:38.662842] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.119 [2024-07-16 01:01:38.663229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.119 [2024-07-16 01:01:38.663258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.119 [2024-07-16 01:01:38.678490] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.119 [2024-07-16 01:01:38.678950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.119 [2024-07-16 01:01:38.678994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.119 [2024-07-16 01:01:38.696496] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.119 [2024-07-16 01:01:38.697027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.119 [2024-07-16 01:01:38.697065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.119 [2024-07-16 01:01:38.715199] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.119 [2024-07-16 01:01:38.715700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.119 [2024-07-16 01:01:38.715727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.119 [2024-07-16 01:01:38.733465] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.119 [2024-07-16 01:01:38.733847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.119 [2024-07-16 01:01:38.733874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.119 [2024-07-16 01:01:38.752087] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.119 [2024-07-16 01:01:38.752549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.119 [2024-07-16 01:01:38.752577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.119 [2024-07-16 01:01:38.771228] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.119 [2024-07-16 01:01:38.771608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.119 [2024-07-16 01:01:38.771654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.119 [2024-07-16 01:01:38.789113] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.120 [2024-07-16 01:01:38.789673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.120 [2024-07-16 01:01:38.789718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.120 [2024-07-16 01:01:38.806375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.120 [2024-07-16 01:01:38.806752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.120 [2024-07-16 01:01:38.806798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.120 [2024-07-16 01:01:38.825674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.120 [2024-07-16 01:01:38.826284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.120 [2024-07-16 01:01:38.826329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.120 [2024-07-16 01:01:38.844182] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.120 [2024-07-16 01:01:38.844690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.120 [2024-07-16 01:01:38.844731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.120 [2024-07-16 01:01:38.862737] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.120 [2024-07-16 01:01:38.863205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.120 [2024-07-16 01:01:38.863233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.377 [2024-07-16 01:01:38.879578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.377 [2024-07-16 01:01:38.880107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.377 [2024-07-16 01:01:38.880150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:04.377 [2024-07-16 01:01:38.897965] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.377 [2024-07-16 01:01:38.898416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.377 [2024-07-16 01:01:38.898456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.377 [2024-07-16 01:01:38.917076] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.377 [2024-07-16 01:01:38.917649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.377 [2024-07-16 01:01:38.917675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.377 [2024-07-16 01:01:38.935423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.377 [2024-07-16 01:01:38.935871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.377 [2024-07-16 01:01:38.935925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.377 [2024-07-16 01:01:38.954273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.377 [2024-07-16 01:01:38.954706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.377 [2024-07-16 01:01:38.954747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.377 [2024-07-16 01:01:38.972501] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.377 [2024-07-16 01:01:38.972895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.377 [2024-07-16 01:01:38.972940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.377 [2024-07-16 01:01:38.989321] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.377 [2024-07-16 01:01:38.989735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.377 [2024-07-16 01:01:38.989778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.377 [2024-07-16 01:01:39.007152] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.377 [2024-07-16 01:01:39.007649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.377 [2024-07-16 01:01:39.007692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.377 [2024-07-16 01:01:39.026479] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.377 [2024-07-16 01:01:39.026928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.377 [2024-07-16 01:01:39.026976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.377 [2024-07-16 01:01:39.044513] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.377 [2024-07-16 01:01:39.044926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.377 [2024-07-16 01:01:39.044954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.377 [2024-07-16 01:01:39.062632] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.377 [2024-07-16 01:01:39.063086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.377 [2024-07-16 01:01:39.063114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.377 [2024-07-16 01:01:39.081131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.377 [2024-07-16 01:01:39.081663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.377 [2024-07-16 01:01:39.081707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.377 [2024-07-16 01:01:39.097579] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.377 [2024-07-16 01:01:39.098006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.377 [2024-07-16 01:01:39.098048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.377 [2024-07-16 01:01:39.114633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.377 [2024-07-16 01:01:39.115036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.377 [2024-07-16 01:01:39.115079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.377 [2024-07-16 01:01:39.132757] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.377 [2024-07-16 01:01:39.133258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.377 [2024-07-16 01:01:39.133286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.636 [2024-07-16 01:01:39.151951] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.636 [2024-07-16 01:01:39.152466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.636 [2024-07-16 01:01:39.152510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.636 [2024-07-16 01:01:39.171235] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.636 [2024-07-16 01:01:39.171622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.636 [2024-07-16 01:01:39.171674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.636 [2024-07-16 01:01:39.190478] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.636 [2024-07-16 01:01:39.190856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.636 [2024-07-16 01:01:39.190912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.636 [2024-07-16 01:01:39.209346] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.636 [2024-07-16 01:01:39.209766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.636 [2024-07-16 01:01:39.209793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.636 [2024-07-16 01:01:39.226808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.636 [2024-07-16 01:01:39.227224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.636 [2024-07-16 01:01:39.227268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.636 [2024-07-16 01:01:39.244614] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.636 [2024-07-16 01:01:39.245034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.636 [2024-07-16 01:01:39.245063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.636 [2024-07-16 01:01:39.261586] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.636 [2024-07-16 01:01:39.261954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.636 [2024-07-16 01:01:39.261998] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.636 [2024-07-16 01:01:39.279283] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.636 [2024-07-16 01:01:39.279717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.636 [2024-07-16 01:01:39.279745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.636 [2024-07-16 01:01:39.299348] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.636 [2024-07-16 01:01:39.299860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.636 [2024-07-16 01:01:39.299897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.636 [2024-07-16 01:01:39.319053] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.636 [2024-07-16 01:01:39.319488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.636 [2024-07-16 01:01:39.319518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.636 [2024-07-16 01:01:39.337406] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.636 [2024-07-16 01:01:39.337937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.636 [2024-07-16 01:01:39.337967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.636 [2024-07-16 01:01:39.354408] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.636 [2024-07-16 01:01:39.354829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.636 [2024-07-16 01:01:39.354872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.636 [2024-07-16 01:01:39.373364] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.636 [2024-07-16 01:01:39.373837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.636 [2024-07-16 01:01:39.373893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.636 [2024-07-16 01:01:39.392344] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.636 [2024-07-16 01:01:39.392902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.636 
[2024-07-16 01:01:39.392949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.893 [2024-07-16 01:01:39.411620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.893 [2024-07-16 01:01:39.412074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.893 [2024-07-16 01:01:39.412119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.893 [2024-07-16 01:01:39.431131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.893 [2024-07-16 01:01:39.431514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.893 [2024-07-16 01:01:39.431541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.893 [2024-07-16 01:01:39.449484] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.893 [2024-07-16 01:01:39.450063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.893 [2024-07-16 01:01:39.450092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.893 [2024-07-16 01:01:39.468740] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.893 [2024-07-16 01:01:39.469322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.893 [2024-07-16 01:01:39.469368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.893 [2024-07-16 01:01:39.487433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.893 [2024-07-16 01:01:39.488016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.893 [2024-07-16 01:01:39.488059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.893 [2024-07-16 01:01:39.507037] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6ac90) with pdu=0x2000190fef90 00:25:04.893 [2024-07-16 01:01:39.507436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.893 [2024-07-16 01:01:39.507478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.893 00:25:04.893 Latency(us) 00:25:04.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.893 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:04.893 nvme0n1 : 2.01 1702.14 212.77 0.00 0.00 9372.03 5922.51 20486.07 00:25:04.893 
=================================================================================================================== 00:25:04.893 Total : 1702.14 212.77 0.00 0.00 9372.03 5922.51 20486.07 00:25:04.893 0 00:25:04.894 01:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:04.894 01:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:04.894 01:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:04.894 01:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:04.894 | .driver_specific 00:25:04.894 | .nvme_error 00:25:04.894 | .status_code 00:25:04.894 | .command_transient_transport_error' 00:25:05.152 01:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 110 > 0 )) 00:25:05.152 01:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2743234 00:25:05.152 01:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2743234 ']' 00:25:05.152 01:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2743234 00:25:05.152 01:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:05.152 01:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:05.152 01:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2743234 00:25:05.152 01:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:05.152 01:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:05.152 01:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2743234' 00:25:05.152 killing process with pid 2743234 00:25:05.152 01:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2743234 00:25:05.152 Received shutdown signal, test time was about 2.000000 seconds 00:25:05.152 00:25:05.152 Latency(us) 00:25:05.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.152 =================================================================================================================== 00:25:05.152 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:05.152 01:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2743234 00:25:05.410 01:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2741733 00:25:05.410 01:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2741733 ']' 00:25:05.410 01:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2741733 00:25:05.410 01:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:05.410 01:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:05.410 01:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2741733 00:25:05.410 01:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 
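For context, the pass/fail check traced above amounts to reading the transient transport error counter that the bperf bdevperf application keeps for the nvme0n1 bdev, over its RPC socket. A condensed sketch of that query, assuming the /var/tmp/bperf.sock socket path used in this run (the errcount variable name is only illustrative):

  # ask the bperf instance for per-bdev I/O statistics and extract the count of
  # commands that completed with TRANSIENT TRANSPORT ERROR (00/22)
  errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # this stage of the digest-error test only passes if at least one such error was seen
  (( errcount > 0 ))

Here the counter reads 110, so the (( 110 > 0 )) check above succeeds and the bperf process is torn down.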
00:25:05.410 01:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:05.410 01:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2741733' 00:25:05.410 killing process with pid 2741733 00:25:05.410 01:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2741733 00:25:05.410 01:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2741733 00:25:05.669 00:25:05.669 real 0m16.195s 00:25:05.669 user 0m32.880s 00:25:05.669 sys 0m3.956s 00:25:05.669 01:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:05.669 01:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:05.669 ************************************ 00:25:05.669 END TEST nvmf_digest_error 00:25:05.669 ************************************ 00:25:05.669 01:01:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:25:05.669 01:01:40 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:05.669 01:01:40 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:05.669 01:01:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:05.669 01:01:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:25:05.669 01:01:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:05.669 01:01:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:25:05.669 01:01:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:05.669 01:01:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:05.669 rmmod nvme_tcp 00:25:05.669 rmmod nvme_fabrics 00:25:05.929 rmmod nvme_keyring 00:25:05.929 01:01:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:05.929 01:01:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:25:05.929 01:01:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:25:05.929 01:01:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2741733 ']' 00:25:05.929 01:01:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2741733 00:25:05.929 01:01:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2741733 ']' 00:25:05.929 01:01:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2741733 00:25:05.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2741733) - No such process 00:25:05.929 01:01:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2741733 is not found' 00:25:05.929 Process with pid 2741733 is not found 00:25:05.929 01:01:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:05.929 01:01:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:05.929 01:01:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:05.929 01:01:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:05.929 01:01:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:05.929 01:01:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.929 01:01:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:05.929 01:01:40 nvmf_tcp.nvmf_digest -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.833 01:01:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:07.833 00:25:07.833 real 0m37.068s 00:25:07.833 user 1m7.075s 00:25:07.833 sys 0m9.302s 00:25:07.833 01:01:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:07.833 01:01:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:07.833 ************************************ 00:25:07.833 END TEST nvmf_digest 00:25:07.833 ************************************ 00:25:07.833 01:01:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:07.833 01:01:42 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:25:07.833 01:01:42 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:25:07.833 01:01:42 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:25:07.834 01:01:42 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:07.834 01:01:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:07.834 01:01:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:07.834 01:01:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:07.834 ************************************ 00:25:07.834 START TEST nvmf_bdevperf 00:25:07.834 ************************************ 00:25:07.834 01:01:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:07.834 * Looking for test storage... 00:25:07.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:07.834 01:01:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.834 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:07.834 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.834 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.834 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.834 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.834 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.834 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.834 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.834 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.834 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.834 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:08.092 01:01:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:10.000 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:10.000 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:10.000 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:10.000 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:10.000 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:10.000 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:10.000 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:10.000 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:25:10.000 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:10.000 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:25:10.000 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:25:10.000 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:25:10.000 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf 
-- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:10.001 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:10.001 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:10.001 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:10.001 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.001 
01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:10.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:10.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:25:10.001 00:25:10.001 --- 10.0.0.2 ping statistics --- 00:25:10.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.001 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:10.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:25:10.001 00:25:10.001 --- 10.0.0.1 ping statistics --- 00:25:10.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.001 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2745586 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2745586 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2745586 ']' 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
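Condensed from the nvmf_tcp_init trace above, the test network on this rig is built roughly as follows (the cvl_* interface names and the cvl_0_0_ns_spdk namespace are specific to this run's E810 ports):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target-side port (cvl_0_0, 10.0.0.2) is isolated in the namespace while the initiator keeps cvl_0_1 (10.0.0.1) in the root namespace, so NVMe/TCP traffic between target and host has to cross the physical ports rather than short-circuiting over loopback; the two pings confirm the path in both directions before any NVMe-oF traffic starts.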
00:25:10.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:10.001 01:01:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:10.001 [2024-07-16 01:01:44.677067] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:25:10.002 [2024-07-16 01:01:44.677141] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.002 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.002 [2024-07-16 01:01:44.747724] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:10.260 [2024-07-16 01:01:44.872437] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.260 [2024-07-16 01:01:44.872494] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:10.260 [2024-07-16 01:01:44.872509] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.260 [2024-07-16 01:01:44.872522] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.260 [2024-07-16 01:01:44.872534] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:10.260 [2024-07-16 01:01:44.874907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:10.260 [2024-07-16 01:01:44.874967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:10.260 [2024-07-16 01:01:44.874972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.260 01:01:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:10.260 01:01:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:25:10.260 01:01:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:10.260 01:01:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:10.260 01:01:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:10.520 [2024-07-16 01:01:45.025681] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:10.520 Malloc0 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:10.520 [2024-07-16 01:01:45.084673] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:10.520 01:01:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:10.521 01:01:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:10.521 01:01:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:10.521 01:01:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:10.521 01:01:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:10.521 { 00:25:10.521 "params": { 00:25:10.521 "name": "Nvme$subsystem", 00:25:10.521 "trtype": "$TEST_TRANSPORT", 00:25:10.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:10.521 "adrfam": "ipv4", 00:25:10.521 "trsvcid": "$NVMF_PORT", 00:25:10.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.521 "hdgst": ${hdgst:-false}, 00:25:10.521 "ddgst": ${ddgst:-false} 00:25:10.521 }, 00:25:10.521 "method": "bdev_nvme_attach_controller" 00:25:10.521 } 00:25:10.521 EOF 00:25:10.521 )") 00:25:10.521 01:01:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:10.521 01:01:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:25:10.521 01:01:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:10.521 01:01:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:10.521 "params": { 00:25:10.521 "name": "Nvme1", 00:25:10.521 "trtype": "tcp", 00:25:10.521 "traddr": "10.0.0.2", 00:25:10.521 "adrfam": "ipv4", 00:25:10.521 "trsvcid": "4420", 00:25:10.521 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:10.521 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:10.521 "hdgst": false, 00:25:10.521 "ddgst": false 00:25:10.521 }, 00:25:10.521 "method": "bdev_nvme_attach_controller" 00:25:10.521 }' 00:25:10.521 [2024-07-16 01:01:45.133724] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
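Stripped of the xtrace noise, tgt_init in host/bdevperf.sh comes down to starting nvmf_tgt inside the namespace and issuing five RPCs (rpc_cmd is the autotest helper that forwards to the target's JSON-RPC socket; the 64/512 sizes come from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE above, and workspace paths are trimmed here):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # waitforlisten polls /var/tmp/spdk.sock until the target is up
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The result is a 64 MiB RAM-backed namespace exported as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, which is what the bdevperf initiator below attaches to.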
00:25:10.521 [2024-07-16 01:01:45.133791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2745726 ] 00:25:10.521 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.521 [2024-07-16 01:01:45.192204] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.781 [2024-07-16 01:01:45.305100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.781 Running I/O for 1 seconds... 00:25:12.157 00:25:12.157 Latency(us) 00:25:12.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.157 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:12.157 Verification LBA range: start 0x0 length 0x4000 00:25:12.157 Nvme1n1 : 1.01 8644.63 33.77 0.00 0.00 14745.51 2973.39 15437.37 00:25:12.157 =================================================================================================================== 00:25:12.157 Total : 8644.63 33.77 0.00 0.00 14745.51 2973.39 15437.37 00:25:12.157 01:01:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2745874 00:25:12.157 01:01:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:12.157 01:01:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:12.157 01:01:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:12.157 01:01:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:12.157 01:01:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:12.157 01:01:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:12.157 01:01:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:12.157 { 00:25:12.157 "params": { 00:25:12.157 "name": "Nvme$subsystem", 00:25:12.157 "trtype": "$TEST_TRANSPORT", 00:25:12.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:12.157 "adrfam": "ipv4", 00:25:12.157 "trsvcid": "$NVMF_PORT", 00:25:12.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:12.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:12.157 "hdgst": ${hdgst:-false}, 00:25:12.157 "ddgst": ${ddgst:-false} 00:25:12.157 }, 00:25:12.157 "method": "bdev_nvme_attach_controller" 00:25:12.157 } 00:25:12.157 EOF 00:25:12.157 )") 00:25:12.157 01:01:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:12.157 01:01:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:25:12.157 01:01:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:12.157 01:01:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:12.157 "params": { 00:25:12.157 "name": "Nvme1", 00:25:12.157 "trtype": "tcp", 00:25:12.157 "traddr": "10.0.0.2", 00:25:12.157 "adrfam": "ipv4", 00:25:12.157 "trsvcid": "4420", 00:25:12.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:12.157 "hdgst": false, 00:25:12.157 "ddgst": false 00:25:12.157 }, 00:25:12.157 "method": "bdev_nvme_attach_controller" 00:25:12.157 }' 00:25:12.157 [2024-07-16 01:01:46.845641] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
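Both bdevperf runs follow the same pattern: gen_nvmf_target_json emits a bdev_nvme_attach_controller entry for Nvme1 pointing at traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1 (hdgst/ddgst off), and that JSON is handed to bdevperf over a process-substitution fd rather than a file on disk (workspace prefix trimmed):

    build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
    build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f

The first, 1-second pass is a sanity check (roughly 8.6K IOPS at queue depth 128 against the 64 MiB Malloc0 namespace). The second, 15-second run is started with -f and left in flight while host/bdevperf.sh kills the target with kill -9; the long run of ABORTED - SQ DELETION completions that follows is consistent with the initiator failing back its queued commands once the qpair to 10.0.0.2 is torn down.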
00:25:12.157 [2024-07-16 01:01:46.845716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2745874 ] 00:25:12.157 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.157 [2024-07-16 01:01:46.904242] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.417 [2024-07-16 01:01:47.013985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.677 Running I/O for 15 seconds... 00:25:15.209 01:01:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2745586 00:25:15.209 01:01:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:15.209 [2024-07-16 01:01:49.813015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.209 [2024-07-16 01:01:49.813063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.209 [2024-07-16 01:01:49.813111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.209 [2024-07-16 01:01:49.813143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.209 [2024-07-16 01:01:49.813200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.209 [2024-07-16 01:01:49.813245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.209 [2024-07-16 01:01:49.813276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.209 [2024-07-16 01:01:49.813322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.209 [2024-07-16 01:01:49.813356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.209 [2024-07-16 01:01:49.813388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.209 [2024-07-16 01:01:49.813423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.209 [2024-07-16 01:01:49.813458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.209 [2024-07-16 01:01:49.813494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.209 [2024-07-16 01:01:49.813530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.209 [2024-07-16 01:01:49.813564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.209 [2024-07-16 01:01:49.813595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.209 [2024-07-16 01:01:49.813631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.209 [2024-07-16 01:01:49.813663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.209 [2024-07-16 01:01:49.813695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 
[2024-07-16 01:01:49.813712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.209 [2024-07-16 01:01:49.813726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.209 [2024-07-16 01:01:49.813757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.209 [2024-07-16 01:01:49.813791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.209 [2024-07-16 01:01:49.813822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.209 [2024-07-16 01:01:49.813854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.209 [2024-07-16 01:01:49.813895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.209 [2024-07-16 01:01:49.813943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.209 [2024-07-16 01:01:49.813973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.813988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.209 [2024-07-16 01:01:49.814001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.209 [2024-07-16 01:01:49.814016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.210 [2024-07-16 01:01:49.814260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:49216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:126 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49320 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.814977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.814992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.815006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.815022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.815036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.815051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:15.210 [2024-07-16 01:01:49.815064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.815079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.815093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.815108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.815121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.815139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.815152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.815183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.815200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.815215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.815242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.815260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.815275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.815292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.815307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.815328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.815344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.815361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.815377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.815394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 
01:01:49.815409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.210 [2024-07-16 01:01:49.815426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.210 [2024-07-16 01:01:49.815441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.815457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.815472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.815488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.815503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.815520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.815535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.815551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.815566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.815583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.815598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.815614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.815629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.815645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.815660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.815677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.815692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.815708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.815727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.815744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.815759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.815776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.815791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.815808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.815823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.815839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.815854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.815871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:49600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.815894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.815927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.815942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.815957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.815971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.815986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:49624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.211 [2024-07-16 01:01:49.816087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.211 [2024-07-16 01:01:49.816116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.211 [2024-07-16 01:01:49.816149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.211 [2024-07-16 01:01:49.816197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.211 [2024-07-16 01:01:49.816229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.211 [2024-07-16 01:01:49.816261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:49664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.211 [2024-07-16 01:01:49.816547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:49728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 
[2024-07-16 01:01:49.816732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:49760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.211 [2024-07-16 01:01:49.816827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.211 [2024-07-16 01:01:49.816842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.816859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.212 [2024-07-16 01:01:49.816874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.816898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.212 [2024-07-16 01:01:49.816929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.816945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.212 [2024-07-16 01:01:49.816958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.816974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.212 [2024-07-16 01:01:49.816992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.817007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.212 [2024-07-16 01:01:49.817021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.817036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.212 [2024-07-16 01:01:49.817050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.817065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.212 [2024-07-16 01:01:49.817079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.817094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.212 [2024-07-16 01:01:49.817107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.817122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.212 [2024-07-16 01:01:49.817136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.817166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.212 [2024-07-16 01:01:49.817180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.817195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.212 [2024-07-16 01:01:49.817208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.817239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:49872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.212 [2024-07-16 01:01:49.817255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.817271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.212 [2024-07-16 01:01:49.817286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.817303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.212 [2024-07-16 01:01:49.817318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.817333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1320c80 is same with the state(5) to be set 00:25:15.212 [2024-07-16 01:01:49.817352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.212 [2024-07-16 01:01:49.817365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.212 [2024-07-16 01:01:49.817378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49896 len:8 PRP1 0x0 PRP2 0x0 00:25:15.212 [2024-07-16 01:01:49.817392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.817462] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1320c80 was disconnected and freed. reset controller. 00:25:15.212 [2024-07-16 01:01:49.817536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.212 [2024-07-16 01:01:49.817559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.817576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.212 [2024-07-16 01:01:49.817590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.817610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.212 [2024-07-16 01:01:49.817626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.817641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.212 [2024-07-16 01:01:49.817655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.212 [2024-07-16 01:01:49.817668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.212 [2024-07-16 01:01:49.821466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.212 [2024-07-16 01:01:49.821506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.212 [2024-07-16 01:01:49.822202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.212 [2024-07-16 01:01:49.822245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.212 [2024-07-16 01:01:49.822263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.212 [2024-07-16 01:01:49.822502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.212 [2024-07-16 01:01:49.822746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.212 [2024-07-16 01:01:49.822769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.212 [2024-07-16 01:01:49.822787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.212 [2024-07-16 01:01:49.826363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.212 [2024-07-16 01:01:49.835648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.212 [2024-07-16 01:01:49.836109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.212 [2024-07-16 01:01:49.836140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.212 [2024-07-16 01:01:49.836159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.212 [2024-07-16 01:01:49.836396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.212 [2024-07-16 01:01:49.836638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.212 [2024-07-16 01:01:49.836661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.212 [2024-07-16 01:01:49.836676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.212 [2024-07-16 01:01:49.840260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.212 [2024-07-16 01:01:49.849527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.212 [2024-07-16 01:01:49.849997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.212 [2024-07-16 01:01:49.850026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.212 [2024-07-16 01:01:49.850042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.212 [2024-07-16 01:01:49.850301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.212 [2024-07-16 01:01:49.850544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.212 [2024-07-16 01:01:49.850566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.212 [2024-07-16 01:01:49.850581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.212 [2024-07-16 01:01:49.854166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.212 [2024-07-16 01:01:49.863437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.212 [2024-07-16 01:01:49.863900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.212 [2024-07-16 01:01:49.863931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.212 [2024-07-16 01:01:49.863948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.212 [2024-07-16 01:01:49.864187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.212 [2024-07-16 01:01:49.864428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.212 [2024-07-16 01:01:49.864452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.212 [2024-07-16 01:01:49.864466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.212 [2024-07-16 01:01:49.868045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.212 [2024-07-16 01:01:49.877314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.212 [2024-07-16 01:01:49.877778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.212 [2024-07-16 01:01:49.877809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.212 [2024-07-16 01:01:49.877826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.212 [2024-07-16 01:01:49.878075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.212 [2024-07-16 01:01:49.878317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.212 [2024-07-16 01:01:49.878340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.212 [2024-07-16 01:01:49.878355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.212 [2024-07-16 01:01:49.881929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.212 [2024-07-16 01:01:49.891188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.212 [2024-07-16 01:01:49.891662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.212 [2024-07-16 01:01:49.891692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.212 [2024-07-16 01:01:49.891715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.213 [2024-07-16 01:01:49.891964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.213 [2024-07-16 01:01:49.892206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.213 [2024-07-16 01:01:49.892229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.213 [2024-07-16 01:01:49.892245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.213 [2024-07-16 01:01:49.895812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.213 [2024-07-16 01:01:49.905085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.213 [2024-07-16 01:01:49.905558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.213 [2024-07-16 01:01:49.905589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.213 [2024-07-16 01:01:49.905607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.213 [2024-07-16 01:01:49.905844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.213 [2024-07-16 01:01:49.906096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.213 [2024-07-16 01:01:49.906120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.213 [2024-07-16 01:01:49.906135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.213 [2024-07-16 01:01:49.909700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.213 [2024-07-16 01:01:49.918981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.213 [2024-07-16 01:01:49.919411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.213 [2024-07-16 01:01:49.919442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.213 [2024-07-16 01:01:49.919459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.213 [2024-07-16 01:01:49.919696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.213 [2024-07-16 01:01:49.919949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.213 [2024-07-16 01:01:49.919974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.213 [2024-07-16 01:01:49.919989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.213 [2024-07-16 01:01:49.923552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.213 [2024-07-16 01:01:49.932812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.213 [2024-07-16 01:01:49.933277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.213 [2024-07-16 01:01:49.933308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.213 [2024-07-16 01:01:49.933326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.213 [2024-07-16 01:01:49.933563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.213 [2024-07-16 01:01:49.933805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.213 [2024-07-16 01:01:49.933834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.213 [2024-07-16 01:01:49.933849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.213 [2024-07-16 01:01:49.937427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.213 [2024-07-16 01:01:49.946690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.213 [2024-07-16 01:01:49.947170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.213 [2024-07-16 01:01:49.947201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.213 [2024-07-16 01:01:49.947218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.213 [2024-07-16 01:01:49.947456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.213 [2024-07-16 01:01:49.947696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.213 [2024-07-16 01:01:49.947720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.213 [2024-07-16 01:01:49.947735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.213 [2024-07-16 01:01:49.951326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.213 [2024-07-16 01:01:49.960607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.213 [2024-07-16 01:01:49.961041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.213 [2024-07-16 01:01:49.961073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.213 [2024-07-16 01:01:49.961091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.213 [2024-07-16 01:01:49.961329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.213 [2024-07-16 01:01:49.961577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.213 [2024-07-16 01:01:49.961600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.213 [2024-07-16 01:01:49.961615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.472 [2024-07-16 01:01:49.965200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.472 [2024-07-16 01:01:49.974485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.472 [2024-07-16 01:01:49.974946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.472 [2024-07-16 01:01:49.974978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.472 [2024-07-16 01:01:49.974995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.472 [2024-07-16 01:01:49.975233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.472 [2024-07-16 01:01:49.975474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.472 [2024-07-16 01:01:49.975496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.472 [2024-07-16 01:01:49.975511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.472 [2024-07-16 01:01:49.979092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.472 [2024-07-16 01:01:49.988369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.472 [2024-07-16 01:01:49.988811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.472 [2024-07-16 01:01:49.988842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.472 [2024-07-16 01:01:49.988868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.472 [2024-07-16 01:01:49.989116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.472 [2024-07-16 01:01:49.989358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.472 [2024-07-16 01:01:49.989381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.472 [2024-07-16 01:01:49.989398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.473 [2024-07-16 01:01:49.993090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.473 [2024-07-16 01:01:50.002473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.473 [2024-07-16 01:01:50.002926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.473 [2024-07-16 01:01:50.002959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.473 [2024-07-16 01:01:50.002978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.473 [2024-07-16 01:01:50.003221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.473 [2024-07-16 01:01:50.003485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.473 [2024-07-16 01:01:50.003510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.473 [2024-07-16 01:01:50.003526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.473 [2024-07-16 01:01:50.007209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.473 [2024-07-16 01:01:50.016674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.473 [2024-07-16 01:01:50.017157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.473 [2024-07-16 01:01:50.017191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.473 [2024-07-16 01:01:50.017210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.473 [2024-07-16 01:01:50.017448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.473 [2024-07-16 01:01:50.017690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.473 [2024-07-16 01:01:50.017714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.473 [2024-07-16 01:01:50.017729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.473 [2024-07-16 01:01:50.021305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.473 [2024-07-16 01:01:50.030599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.473 [2024-07-16 01:01:50.031073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.473 [2024-07-16 01:01:50.031105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.473 [2024-07-16 01:01:50.031131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.473 [2024-07-16 01:01:50.031371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.473 [2024-07-16 01:01:50.031614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.473 [2024-07-16 01:01:50.031637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.473 [2024-07-16 01:01:50.031653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.473 [2024-07-16 01:01:50.035232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.473 [2024-07-16 01:01:50.044524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.473 [2024-07-16 01:01:50.045007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.473 [2024-07-16 01:01:50.045039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.473 [2024-07-16 01:01:50.045058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.473 [2024-07-16 01:01:50.045296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.473 [2024-07-16 01:01:50.045539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.473 [2024-07-16 01:01:50.045563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.473 [2024-07-16 01:01:50.045578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.473 [2024-07-16 01:01:50.049158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.473 [2024-07-16 01:01:50.058441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.473 [2024-07-16 01:01:50.058892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.473 [2024-07-16 01:01:50.058925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.473 [2024-07-16 01:01:50.058943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.473 [2024-07-16 01:01:50.059182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.473 [2024-07-16 01:01:50.059424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.473 [2024-07-16 01:01:50.059447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.473 [2024-07-16 01:01:50.059462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.473 [2024-07-16 01:01:50.063038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.473 [2024-07-16 01:01:50.072588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.473 [2024-07-16 01:01:50.073069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.473 [2024-07-16 01:01:50.073105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.473 [2024-07-16 01:01:50.073126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.473 [2024-07-16 01:01:50.073372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.473 [2024-07-16 01:01:50.073622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.473 [2024-07-16 01:01:50.073648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.473 [2024-07-16 01:01:50.073669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.473 [2024-07-16 01:01:50.077373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.473 [2024-07-16 01:01:50.086633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.473 [2024-07-16 01:01:50.087088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.473 [2024-07-16 01:01:50.087120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.473 [2024-07-16 01:01:50.087139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.473 [2024-07-16 01:01:50.087377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.473 [2024-07-16 01:01:50.087619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.473 [2024-07-16 01:01:50.087642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.473 [2024-07-16 01:01:50.087657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.473 [2024-07-16 01:01:50.091238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.473 [2024-07-16 01:01:50.100660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.473 [2024-07-16 01:01:50.101142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.473 [2024-07-16 01:01:50.101174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.473 [2024-07-16 01:01:50.101192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.473 [2024-07-16 01:01:50.101430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.473 [2024-07-16 01:01:50.101671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.473 [2024-07-16 01:01:50.101694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.473 [2024-07-16 01:01:50.101709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.473 [2024-07-16 01:01:50.105288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.473 [2024-07-16 01:01:50.114566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.473 [2024-07-16 01:01:50.115040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.473 [2024-07-16 01:01:50.115072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.473 [2024-07-16 01:01:50.115090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.473 [2024-07-16 01:01:50.115327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.473 [2024-07-16 01:01:50.115569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.473 [2024-07-16 01:01:50.115592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.473 [2024-07-16 01:01:50.115606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.473 [2024-07-16 01:01:50.119182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.473 [2024-07-16 01:01:50.128450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.473 [2024-07-16 01:01:50.128897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.473 [2024-07-16 01:01:50.128928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.473 [2024-07-16 01:01:50.128946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.473 [2024-07-16 01:01:50.129183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.473 [2024-07-16 01:01:50.129424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.473 [2024-07-16 01:01:50.129447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.473 [2024-07-16 01:01:50.129462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.473 [2024-07-16 01:01:50.133041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.473 [2024-07-16 01:01:50.142308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.473 [2024-07-16 01:01:50.142747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.473 [2024-07-16 01:01:50.142778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.473 [2024-07-16 01:01:50.142795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.473 [2024-07-16 01:01:50.143044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.473 [2024-07-16 01:01:50.143286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.473 [2024-07-16 01:01:50.143310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.473 [2024-07-16 01:01:50.143324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.474 [2024-07-16 01:01:50.146898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.474 [2024-07-16 01:01:50.156168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.474 [2024-07-16 01:01:50.156659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.474 [2024-07-16 01:01:50.156690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.474 [2024-07-16 01:01:50.156707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.474 [2024-07-16 01:01:50.156957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.474 [2024-07-16 01:01:50.157199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.474 [2024-07-16 01:01:50.157222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.474 [2024-07-16 01:01:50.157237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.474 [2024-07-16 01:01:50.160803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.474 [2024-07-16 01:01:50.170101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.474 [2024-07-16 01:01:50.170513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.474 [2024-07-16 01:01:50.170544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.474 [2024-07-16 01:01:50.170561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.474 [2024-07-16 01:01:50.170804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.474 [2024-07-16 01:01:50.171059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.474 [2024-07-16 01:01:50.171084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.474 [2024-07-16 01:01:50.171098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.474 [2024-07-16 01:01:50.174664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.474 [2024-07-16 01:01:50.183939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.474 [2024-07-16 01:01:50.184370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.474 [2024-07-16 01:01:50.184401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.474 [2024-07-16 01:01:50.184418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.474 [2024-07-16 01:01:50.184656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.474 [2024-07-16 01:01:50.184908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.474 [2024-07-16 01:01:50.184932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.474 [2024-07-16 01:01:50.184947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.474 [2024-07-16 01:01:50.188514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.474 [2024-07-16 01:01:50.197782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.474 [2024-07-16 01:01:50.198227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.474 [2024-07-16 01:01:50.198257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.474 [2024-07-16 01:01:50.198275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.474 [2024-07-16 01:01:50.198513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.474 [2024-07-16 01:01:50.198754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.474 [2024-07-16 01:01:50.198777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.474 [2024-07-16 01:01:50.198792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.474 [2024-07-16 01:01:50.202369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.474 [2024-07-16 01:01:50.211637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.474 [2024-07-16 01:01:50.212114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.474 [2024-07-16 01:01:50.212145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.474 [2024-07-16 01:01:50.212162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.474 [2024-07-16 01:01:50.212400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.474 [2024-07-16 01:01:50.212641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.474 [2024-07-16 01:01:50.212664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.474 [2024-07-16 01:01:50.212684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.474 [2024-07-16 01:01:50.216262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.474 [2024-07-16 01:01:50.225531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.474 [2024-07-16 01:01:50.226012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.474 [2024-07-16 01:01:50.226043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.474 [2024-07-16 01:01:50.226060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.474 [2024-07-16 01:01:50.226299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.474 [2024-07-16 01:01:50.226540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.474 [2024-07-16 01:01:50.226563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.474 [2024-07-16 01:01:50.226577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.733 [2024-07-16 01:01:50.230156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.733 [2024-07-16 01:01:50.239435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.733 [2024-07-16 01:01:50.239902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.733 [2024-07-16 01:01:50.239934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.733 [2024-07-16 01:01:50.239952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.733 [2024-07-16 01:01:50.240190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.733 [2024-07-16 01:01:50.240431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.733 [2024-07-16 01:01:50.240455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.733 [2024-07-16 01:01:50.240469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.733 [2024-07-16 01:01:50.244047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.733 [2024-07-16 01:01:50.253312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.733 [2024-07-16 01:01:50.253751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.733 [2024-07-16 01:01:50.253782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.733 [2024-07-16 01:01:50.253799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.733 [2024-07-16 01:01:50.254051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.733 [2024-07-16 01:01:50.254294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.733 [2024-07-16 01:01:50.254317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.733 [2024-07-16 01:01:50.254331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.733 [2024-07-16 01:01:50.257908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.733 [2024-07-16 01:01:50.267190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.733 [2024-07-16 01:01:50.267646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.733 [2024-07-16 01:01:50.267681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.733 [2024-07-16 01:01:50.267700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.733 [2024-07-16 01:01:50.267949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.733 [2024-07-16 01:01:50.268191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.733 [2024-07-16 01:01:50.268214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.733 [2024-07-16 01:01:50.268229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.733 [2024-07-16 01:01:50.271793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.733 [2024-07-16 01:01:50.281075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.733 [2024-07-16 01:01:50.281507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.733 [2024-07-16 01:01:50.281539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.733 [2024-07-16 01:01:50.281556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.733 [2024-07-16 01:01:50.281794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.734 [2024-07-16 01:01:50.282048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.734 [2024-07-16 01:01:50.282072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.734 [2024-07-16 01:01:50.282087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.734 [2024-07-16 01:01:50.285654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.734 [2024-07-16 01:01:50.294926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.734 [2024-07-16 01:01:50.295371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.734 [2024-07-16 01:01:50.295402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.734 [2024-07-16 01:01:50.295420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.734 [2024-07-16 01:01:50.295657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.734 [2024-07-16 01:01:50.295910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.734 [2024-07-16 01:01:50.295934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.734 [2024-07-16 01:01:50.295949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.734 [2024-07-16 01:01:50.299518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.734 [2024-07-16 01:01:50.308791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.734 [2024-07-16 01:01:50.309262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.734 [2024-07-16 01:01:50.309293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.734 [2024-07-16 01:01:50.309310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.734 [2024-07-16 01:01:50.309547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.734 [2024-07-16 01:01:50.309795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.734 [2024-07-16 01:01:50.309818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.734 [2024-07-16 01:01:50.309833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.734 [2024-07-16 01:01:50.313408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.734 [2024-07-16 01:01:50.322684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.734 [2024-07-16 01:01:50.323124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.734 [2024-07-16 01:01:50.323155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.734 [2024-07-16 01:01:50.323172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.734 [2024-07-16 01:01:50.323410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.734 [2024-07-16 01:01:50.323650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.734 [2024-07-16 01:01:50.323673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.734 [2024-07-16 01:01:50.323688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.734 [2024-07-16 01:01:50.327266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.734 [2024-07-16 01:01:50.336560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.734 [2024-07-16 01:01:50.336995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.734 [2024-07-16 01:01:50.337026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.734 [2024-07-16 01:01:50.337044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.734 [2024-07-16 01:01:50.337289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.734 [2024-07-16 01:01:50.337531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.734 [2024-07-16 01:01:50.337553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.734 [2024-07-16 01:01:50.337568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.734 [2024-07-16 01:01:50.341142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.734 [2024-07-16 01:01:50.350433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.734 [2024-07-16 01:01:50.350871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.734 [2024-07-16 01:01:50.350909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.734 [2024-07-16 01:01:50.350927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.734 [2024-07-16 01:01:50.351165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.734 [2024-07-16 01:01:50.351407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.734 [2024-07-16 01:01:50.351430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.734 [2024-07-16 01:01:50.351445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.734 [2024-07-16 01:01:50.355056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.734 [2024-07-16 01:01:50.364333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.734 [2024-07-16 01:01:50.364820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.734 [2024-07-16 01:01:50.364852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.734 [2024-07-16 01:01:50.364869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.734 [2024-07-16 01:01:50.365119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.734 [2024-07-16 01:01:50.365361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.734 [2024-07-16 01:01:50.365384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.734 [2024-07-16 01:01:50.365399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.734 [2024-07-16 01:01:50.368981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.734 [2024-07-16 01:01:50.378261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.734 [2024-07-16 01:01:50.378723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.734 [2024-07-16 01:01:50.378753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.734 [2024-07-16 01:01:50.378771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.734 [2024-07-16 01:01:50.379021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.734 [2024-07-16 01:01:50.379263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.734 [2024-07-16 01:01:50.379286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.734 [2024-07-16 01:01:50.379301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.734 [2024-07-16 01:01:50.382881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.734 [2024-07-16 01:01:50.392163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.734 [2024-07-16 01:01:50.392593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.734 [2024-07-16 01:01:50.392623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.734 [2024-07-16 01:01:50.392641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.734 [2024-07-16 01:01:50.392889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.734 [2024-07-16 01:01:50.393131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.734 [2024-07-16 01:01:50.393154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.734 [2024-07-16 01:01:50.393169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.734 [2024-07-16 01:01:50.396739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.734 [2024-07-16 01:01:50.406017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.734 [2024-07-16 01:01:50.406484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.734 [2024-07-16 01:01:50.406516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.734 [2024-07-16 01:01:50.406539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.734 [2024-07-16 01:01:50.406778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.734 [2024-07-16 01:01:50.407032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.734 [2024-07-16 01:01:50.407057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.734 [2024-07-16 01:01:50.407072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.734 [2024-07-16 01:01:50.410641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.734 [2024-07-16 01:01:50.419955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.734 [2024-07-16 01:01:50.420417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.734 [2024-07-16 01:01:50.420448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.734 [2024-07-16 01:01:50.420465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.734 [2024-07-16 01:01:50.420704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.734 [2024-07-16 01:01:50.420957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.734 [2024-07-16 01:01:50.420982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.734 [2024-07-16 01:01:50.420996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.734 [2024-07-16 01:01:50.424567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.734 [2024-07-16 01:01:50.433864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.734 [2024-07-16 01:01:50.434343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.734 [2024-07-16 01:01:50.434374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.734 [2024-07-16 01:01:50.434391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.734 [2024-07-16 01:01:50.434628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.734 [2024-07-16 01:01:50.434869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.735 [2024-07-16 01:01:50.434905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.735 [2024-07-16 01:01:50.434921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.735 [2024-07-16 01:01:50.438495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.735 [2024-07-16 01:01:50.447786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.735 [2024-07-16 01:01:50.448254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.735 [2024-07-16 01:01:50.448285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.735 [2024-07-16 01:01:50.448303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.735 [2024-07-16 01:01:50.448542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.735 [2024-07-16 01:01:50.448785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.735 [2024-07-16 01:01:50.448813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.735 [2024-07-16 01:01:50.448828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.735 [2024-07-16 01:01:50.452428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.735 [2024-07-16 01:01:50.461727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.735 [2024-07-16 01:01:50.462152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.735 [2024-07-16 01:01:50.462183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.735 [2024-07-16 01:01:50.462201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.735 [2024-07-16 01:01:50.462438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.735 [2024-07-16 01:01:50.462680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.735 [2024-07-16 01:01:50.462703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.735 [2024-07-16 01:01:50.462717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.735 [2024-07-16 01:01:50.466303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.735 [2024-07-16 01:01:50.475597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.735 [2024-07-16 01:01:50.476065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.735 [2024-07-16 01:01:50.476096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.735 [2024-07-16 01:01:50.476113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.735 [2024-07-16 01:01:50.476350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.735 [2024-07-16 01:01:50.476592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.735 [2024-07-16 01:01:50.476615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.735 [2024-07-16 01:01:50.476630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.735 [2024-07-16 01:01:50.480216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.735 [2024-07-16 01:01:50.489531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.995 [2024-07-16 01:01:50.489997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.995 [2024-07-16 01:01:50.490029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.995 [2024-07-16 01:01:50.490047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.995 [2024-07-16 01:01:50.490285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.995 [2024-07-16 01:01:50.490526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.995 [2024-07-16 01:01:50.490550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.995 [2024-07-16 01:01:50.490576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.995 [2024-07-16 01:01:50.494168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.995 [2024-07-16 01:01:50.503466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.995 [2024-07-16 01:01:50.503930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.995 [2024-07-16 01:01:50.503961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.995 [2024-07-16 01:01:50.503978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.995 [2024-07-16 01:01:50.504216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.995 [2024-07-16 01:01:50.504457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.995 [2024-07-16 01:01:50.504481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.995 [2024-07-16 01:01:50.504496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.995 [2024-07-16 01:01:50.508076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.995 [2024-07-16 01:01:50.517373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.995 [2024-07-16 01:01:50.517970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.995 [2024-07-16 01:01:50.518001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.995 [2024-07-16 01:01:50.518018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.995 [2024-07-16 01:01:50.518255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.995 [2024-07-16 01:01:50.518497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.995 [2024-07-16 01:01:50.518520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.995 [2024-07-16 01:01:50.518535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.995 [2024-07-16 01:01:50.522120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.995 [2024-07-16 01:01:50.531424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.995 [2024-07-16 01:01:50.531856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.995 [2024-07-16 01:01:50.531897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.995 [2024-07-16 01:01:50.531916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.995 [2024-07-16 01:01:50.532154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.996 [2024-07-16 01:01:50.532396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.996 [2024-07-16 01:01:50.532419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.996 [2024-07-16 01:01:50.532434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.996 [2024-07-16 01:01:50.536014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.996 [2024-07-16 01:01:50.545314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.996 [2024-07-16 01:01:50.545785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.996 [2024-07-16 01:01:50.545816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.996 [2024-07-16 01:01:50.545833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.996 [2024-07-16 01:01:50.546088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.996 [2024-07-16 01:01:50.546331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.996 [2024-07-16 01:01:50.546355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.996 [2024-07-16 01:01:50.546370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.996 [2024-07-16 01:01:50.549959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.996 [2024-07-16 01:01:50.559261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.996 [2024-07-16 01:01:50.559724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.996 [2024-07-16 01:01:50.559755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.996 [2024-07-16 01:01:50.559772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.996 [2024-07-16 01:01:50.560020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.996 [2024-07-16 01:01:50.560262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.996 [2024-07-16 01:01:50.560285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.996 [2024-07-16 01:01:50.560300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.996 [2024-07-16 01:01:50.563870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.996 [2024-07-16 01:01:50.573178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.996 [2024-07-16 01:01:50.573697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.996 [2024-07-16 01:01:50.573728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.996 [2024-07-16 01:01:50.573746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.996 [2024-07-16 01:01:50.573994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.996 [2024-07-16 01:01:50.574237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.996 [2024-07-16 01:01:50.574260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.996 [2024-07-16 01:01:50.574276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.996 [2024-07-16 01:01:50.577846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.996 [2024-07-16 01:01:50.587155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.996 [2024-07-16 01:01:50.587591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.996 [2024-07-16 01:01:50.587622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.996 [2024-07-16 01:01:50.587639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.996 [2024-07-16 01:01:50.587886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.996 [2024-07-16 01:01:50.588128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.996 [2024-07-16 01:01:50.588152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.996 [2024-07-16 01:01:50.588173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.996 [2024-07-16 01:01:50.591746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.996 [2024-07-16 01:01:50.601042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.996 [2024-07-16 01:01:50.601678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.996 [2024-07-16 01:01:50.601729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.996 [2024-07-16 01:01:50.601746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.996 [2024-07-16 01:01:50.601994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.996 [2024-07-16 01:01:50.602237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.996 [2024-07-16 01:01:50.602260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.996 [2024-07-16 01:01:50.602275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.996 [2024-07-16 01:01:50.605853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.996 [2024-07-16 01:01:50.614936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.996 [2024-07-16 01:01:50.615367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.996 [2024-07-16 01:01:50.615397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.996 [2024-07-16 01:01:50.615415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.996 [2024-07-16 01:01:50.615652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.996 [2024-07-16 01:01:50.615906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.996 [2024-07-16 01:01:50.615930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.996 [2024-07-16 01:01:50.615945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.996 [2024-07-16 01:01:50.619515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.996 [2024-07-16 01:01:50.628796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.996 [2024-07-16 01:01:50.629232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.996 [2024-07-16 01:01:50.629263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.996 [2024-07-16 01:01:50.629280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.996 [2024-07-16 01:01:50.629518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.996 [2024-07-16 01:01:50.629759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.996 [2024-07-16 01:01:50.629782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.996 [2024-07-16 01:01:50.629797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.996 [2024-07-16 01:01:50.633377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.996 [2024-07-16 01:01:50.642656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.996 [2024-07-16 01:01:50.643111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.996 [2024-07-16 01:01:50.643142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.996 [2024-07-16 01:01:50.643159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.996 [2024-07-16 01:01:50.643397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.996 [2024-07-16 01:01:50.643638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.996 [2024-07-16 01:01:50.643661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.996 [2024-07-16 01:01:50.643676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.996 [2024-07-16 01:01:50.647270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.996 [2024-07-16 01:01:50.656552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.996 [2024-07-16 01:01:50.657009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.996 [2024-07-16 01:01:50.657041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.996 [2024-07-16 01:01:50.657058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.996 [2024-07-16 01:01:50.657296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.996 [2024-07-16 01:01:50.657538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.996 [2024-07-16 01:01:50.657561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.996 [2024-07-16 01:01:50.657575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.996 [2024-07-16 01:01:50.661157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.996 [2024-07-16 01:01:50.670435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.996 [2024-07-16 01:01:50.670890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.996 [2024-07-16 01:01:50.670921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.996 [2024-07-16 01:01:50.670938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.996 [2024-07-16 01:01:50.671176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.996 [2024-07-16 01:01:50.671418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.996 [2024-07-16 01:01:50.671441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.996 [2024-07-16 01:01:50.671455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.996 [2024-07-16 01:01:50.675038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.996 [2024-07-16 01:01:50.684329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.996 [2024-07-16 01:01:50.684799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.996 [2024-07-16 01:01:50.684829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.996 [2024-07-16 01:01:50.684847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.997 [2024-07-16 01:01:50.685101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.997 [2024-07-16 01:01:50.685343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.997 [2024-07-16 01:01:50.685367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.997 [2024-07-16 01:01:50.685382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.997 [2024-07-16 01:01:50.688960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.997 [2024-07-16 01:01:50.698232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.997 [2024-07-16 01:01:50.698665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.997 [2024-07-16 01:01:50.698696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.997 [2024-07-16 01:01:50.698713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.997 [2024-07-16 01:01:50.698963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.997 [2024-07-16 01:01:50.699205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.997 [2024-07-16 01:01:50.699227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.997 [2024-07-16 01:01:50.699242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.997 [2024-07-16 01:01:50.702812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.997 [2024-07-16 01:01:50.712097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.997 [2024-07-16 01:01:50.712551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.997 [2024-07-16 01:01:50.712581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.997 [2024-07-16 01:01:50.712599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.997 [2024-07-16 01:01:50.712836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.997 [2024-07-16 01:01:50.713089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.997 [2024-07-16 01:01:50.713113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.997 [2024-07-16 01:01:50.713127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.997 [2024-07-16 01:01:50.716697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.997 [2024-07-16 01:01:50.725981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.997 [2024-07-16 01:01:50.726439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.997 [2024-07-16 01:01:50.726470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.997 [2024-07-16 01:01:50.726487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.997 [2024-07-16 01:01:50.726724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.997 [2024-07-16 01:01:50.726980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.997 [2024-07-16 01:01:50.727004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.997 [2024-07-16 01:01:50.727025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.997 [2024-07-16 01:01:50.730598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.997 [2024-07-16 01:01:50.739873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.997 [2024-07-16 01:01:50.740347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.997 [2024-07-16 01:01:50.740378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:15.997 [2024-07-16 01:01:50.740395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:15.997 [2024-07-16 01:01:50.740632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:15.997 [2024-07-16 01:01:50.740874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.997 [2024-07-16 01:01:50.740909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.997 [2024-07-16 01:01:50.740924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.997 [2024-07-16 01:01:50.744494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.258 [2024-07-16 01:01:50.753780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.258 [2024-07-16 01:01:50.754262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.258 [2024-07-16 01:01:50.754293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.258 [2024-07-16 01:01:50.754310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.258 [2024-07-16 01:01:50.754548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.258 [2024-07-16 01:01:50.754789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.258 [2024-07-16 01:01:50.754812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.258 [2024-07-16 01:01:50.754827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.258 [2024-07-16 01:01:50.758416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.258 [2024-07-16 01:01:50.767690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.258 [2024-07-16 01:01:50.768152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.258 [2024-07-16 01:01:50.768183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.258 [2024-07-16 01:01:50.768201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.258 [2024-07-16 01:01:50.768438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.258 [2024-07-16 01:01:50.768680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.258 [2024-07-16 01:01:50.768703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.258 [2024-07-16 01:01:50.768717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.258 [2024-07-16 01:01:50.772299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.258 [2024-07-16 01:01:50.781574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.258 [2024-07-16 01:01:50.782039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.258 [2024-07-16 01:01:50.782076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.258 [2024-07-16 01:01:50.782094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.258 [2024-07-16 01:01:50.782332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.258 [2024-07-16 01:01:50.782574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.258 [2024-07-16 01:01:50.782597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.258 [2024-07-16 01:01:50.782612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.258 [2024-07-16 01:01:50.786195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.258 [2024-07-16 01:01:50.795473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.258 [2024-07-16 01:01:50.796008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.258 [2024-07-16 01:01:50.796039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.258 [2024-07-16 01:01:50.796056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.258 [2024-07-16 01:01:50.796294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.259 [2024-07-16 01:01:50.796536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.259 [2024-07-16 01:01:50.796559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.259 [2024-07-16 01:01:50.796573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.259 [2024-07-16 01:01:50.800155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.259 [2024-07-16 01:01:50.809432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.259 [2024-07-16 01:01:50.809887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.259 [2024-07-16 01:01:50.809918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.259 [2024-07-16 01:01:50.809935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.259 [2024-07-16 01:01:50.810173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.259 [2024-07-16 01:01:50.810415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.259 [2024-07-16 01:01:50.810438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.259 [2024-07-16 01:01:50.810453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.259 [2024-07-16 01:01:50.814037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.259 [2024-07-16 01:01:50.823316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.259 [2024-07-16 01:01:50.823748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.259 [2024-07-16 01:01:50.823778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.259 [2024-07-16 01:01:50.823796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.259 [2024-07-16 01:01:50.824043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.259 [2024-07-16 01:01:50.824291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.259 [2024-07-16 01:01:50.824315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.259 [2024-07-16 01:01:50.824329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.259 [2024-07-16 01:01:50.827912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.259 [2024-07-16 01:01:50.837191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.259 [2024-07-16 01:01:50.837633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.259 [2024-07-16 01:01:50.837663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.259 [2024-07-16 01:01:50.837680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.259 [2024-07-16 01:01:50.837931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.259 [2024-07-16 01:01:50.838173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.259 [2024-07-16 01:01:50.838196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.259 [2024-07-16 01:01:50.838210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.259 [2024-07-16 01:01:50.842015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.259 [2024-07-16 01:01:50.851092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.259 [2024-07-16 01:01:50.851526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.259 [2024-07-16 01:01:50.851557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.259 [2024-07-16 01:01:50.851574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.259 [2024-07-16 01:01:50.851812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.259 [2024-07-16 01:01:50.852064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.259 [2024-07-16 01:01:50.852089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.259 [2024-07-16 01:01:50.852104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.259 [2024-07-16 01:01:50.855682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.259 [2024-07-16 01:01:50.864985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.259 [2024-07-16 01:01:50.865464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.259 [2024-07-16 01:01:50.865495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.259 [2024-07-16 01:01:50.865512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.259 [2024-07-16 01:01:50.865750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.259 [2024-07-16 01:01:50.866003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.259 [2024-07-16 01:01:50.866028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.259 [2024-07-16 01:01:50.866043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.259 [2024-07-16 01:01:50.869620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.259 [2024-07-16 01:01:50.878918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.259 [2024-07-16 01:01:50.879380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.259 [2024-07-16 01:01:50.879410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.259 [2024-07-16 01:01:50.879428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.259 [2024-07-16 01:01:50.879665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.259 [2024-07-16 01:01:50.879919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.259 [2024-07-16 01:01:50.879943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.259 [2024-07-16 01:01:50.879958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.259 [2024-07-16 01:01:50.883528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.259 [2024-07-16 01:01:50.892816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.259 [2024-07-16 01:01:50.893258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.259 [2024-07-16 01:01:50.893290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.259 [2024-07-16 01:01:50.893308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.259 [2024-07-16 01:01:50.893546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.259 [2024-07-16 01:01:50.893787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.259 [2024-07-16 01:01:50.893810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.259 [2024-07-16 01:01:50.893825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.259 [2024-07-16 01:01:50.897405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.259 [2024-07-16 01:01:50.906683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.259 [2024-07-16 01:01:50.907133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.259 [2024-07-16 01:01:50.907164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.259 [2024-07-16 01:01:50.907181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.259 [2024-07-16 01:01:50.907419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.259 [2024-07-16 01:01:50.907661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.259 [2024-07-16 01:01:50.907683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.259 [2024-07-16 01:01:50.907698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.259 [2024-07-16 01:01:50.911280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.259 [2024-07-16 01:01:50.920562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.259 [2024-07-16 01:01:50.921019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.259 [2024-07-16 01:01:50.921050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.259 [2024-07-16 01:01:50.921073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.259 [2024-07-16 01:01:50.921312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.259 [2024-07-16 01:01:50.921553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.259 [2024-07-16 01:01:50.921576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.259 [2024-07-16 01:01:50.921592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.259 [2024-07-16 01:01:50.925174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.259 [2024-07-16 01:01:50.934447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.259 [2024-07-16 01:01:50.934911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.259 [2024-07-16 01:01:50.934941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.259 [2024-07-16 01:01:50.934958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.259 [2024-07-16 01:01:50.935195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.259 [2024-07-16 01:01:50.935437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.259 [2024-07-16 01:01:50.935460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.259 [2024-07-16 01:01:50.935475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.259 [2024-07-16 01:01:50.939052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.259 [2024-07-16 01:01:50.948341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.259 [2024-07-16 01:01:50.948774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.259 [2024-07-16 01:01:50.948805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.259 [2024-07-16 01:01:50.948823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.259 [2024-07-16 01:01:50.949072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.260 [2024-07-16 01:01:50.949314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.260 [2024-07-16 01:01:50.949337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.260 [2024-07-16 01:01:50.949352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.260 [2024-07-16 01:01:50.952929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.260 [2024-07-16 01:01:50.962202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.260 [2024-07-16 01:01:50.962657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.260 [2024-07-16 01:01:50.962688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.260 [2024-07-16 01:01:50.962705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.260 [2024-07-16 01:01:50.962955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.260 [2024-07-16 01:01:50.963198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.260 [2024-07-16 01:01:50.963227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.260 [2024-07-16 01:01:50.963242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.260 [2024-07-16 01:01:50.966812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.260 [2024-07-16 01:01:50.976093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.260 [2024-07-16 01:01:50.976554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.260 [2024-07-16 01:01:50.976585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.260 [2024-07-16 01:01:50.976602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.260 [2024-07-16 01:01:50.976840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.260 [2024-07-16 01:01:50.977094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.260 [2024-07-16 01:01:50.977118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.260 [2024-07-16 01:01:50.977133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.260 [2024-07-16 01:01:50.980716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.260 [2024-07-16 01:01:50.990010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.260 [2024-07-16 01:01:50.990457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.260 [2024-07-16 01:01:50.990488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.260 [2024-07-16 01:01:50.990505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.260 [2024-07-16 01:01:50.990743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.260 [2024-07-16 01:01:50.990996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.260 [2024-07-16 01:01:50.991020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.260 [2024-07-16 01:01:50.991035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.260 [2024-07-16 01:01:50.994614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.260 [2024-07-16 01:01:51.003897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.260 [2024-07-16 01:01:51.004346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.260 [2024-07-16 01:01:51.004377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.260 [2024-07-16 01:01:51.004394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.260 [2024-07-16 01:01:51.004631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.260 [2024-07-16 01:01:51.004873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.260 [2024-07-16 01:01:51.004906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.260 [2024-07-16 01:01:51.004922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.260 [2024-07-16 01:01:51.008493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.522 [2024-07-16 01:01:51.017790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.522 [2024-07-16 01:01:51.018230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.522 [2024-07-16 01:01:51.018260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.522 [2024-07-16 01:01:51.018278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.522 [2024-07-16 01:01:51.018515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.522 [2024-07-16 01:01:51.018756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.522 [2024-07-16 01:01:51.018779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.522 [2024-07-16 01:01:51.018794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.522 [2024-07-16 01:01:51.022382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.522 [2024-07-16 01:01:51.031664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.522 [2024-07-16 01:01:51.032131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.522 [2024-07-16 01:01:51.032162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.522 [2024-07-16 01:01:51.032179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.522 [2024-07-16 01:01:51.032417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.522 [2024-07-16 01:01:51.032659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.522 [2024-07-16 01:01:51.032682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.522 [2024-07-16 01:01:51.032697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.522 [2024-07-16 01:01:51.036276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.522 [2024-07-16 01:01:51.045548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.522 [2024-07-16 01:01:51.046005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.522 [2024-07-16 01:01:51.046036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.522 [2024-07-16 01:01:51.046054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.522 [2024-07-16 01:01:51.046292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.522 [2024-07-16 01:01:51.046533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.522 [2024-07-16 01:01:51.046556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.522 [2024-07-16 01:01:51.046571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.522 [2024-07-16 01:01:51.050151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.522 [2024-07-16 01:01:51.059461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.522 [2024-07-16 01:01:51.059920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.522 [2024-07-16 01:01:51.059952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.522 [2024-07-16 01:01:51.059969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.522 [2024-07-16 01:01:51.060217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.522 [2024-07-16 01:01:51.060458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.522 [2024-07-16 01:01:51.060481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.522 [2024-07-16 01:01:51.060497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.522 [2024-07-16 01:01:51.064081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.522 [2024-07-16 01:01:51.073362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.522 [2024-07-16 01:01:51.073817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.522 [2024-07-16 01:01:51.073847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.522 [2024-07-16 01:01:51.073865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.522 [2024-07-16 01:01:51.074111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.522 [2024-07-16 01:01:51.074353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.522 [2024-07-16 01:01:51.074376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.522 [2024-07-16 01:01:51.074390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.522 [2024-07-16 01:01:51.077964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.522 [2024-07-16 01:01:51.087244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.522 [2024-07-16 01:01:51.087698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.522 [2024-07-16 01:01:51.087729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.522 [2024-07-16 01:01:51.087746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.522 [2024-07-16 01:01:51.087997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.522 [2024-07-16 01:01:51.088239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.522 [2024-07-16 01:01:51.088262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.522 [2024-07-16 01:01:51.088276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.522 [2024-07-16 01:01:51.091847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.522 [2024-07-16 01:01:51.101147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.522 [2024-07-16 01:01:51.101584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.522 [2024-07-16 01:01:51.101614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.522 [2024-07-16 01:01:51.101632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.522 [2024-07-16 01:01:51.101869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.522 [2024-07-16 01:01:51.102121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.522 [2024-07-16 01:01:51.102144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.522 [2024-07-16 01:01:51.102165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.522 [2024-07-16 01:01:51.105734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.522 [2024-07-16 01:01:51.115013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.522 [2024-07-16 01:01:51.115451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.522 [2024-07-16 01:01:51.115482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.522 [2024-07-16 01:01:51.115499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.522 [2024-07-16 01:01:51.115737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.522 [2024-07-16 01:01:51.115989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.522 [2024-07-16 01:01:51.116013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.522 [2024-07-16 01:01:51.116028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.523 [2024-07-16 01:01:51.119594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.523 [2024-07-16 01:01:51.128862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.523 [2024-07-16 01:01:51.129322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.523 [2024-07-16 01:01:51.129353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.523 [2024-07-16 01:01:51.129370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.523 [2024-07-16 01:01:51.129607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.523 [2024-07-16 01:01:51.129848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.523 [2024-07-16 01:01:51.129872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.523 [2024-07-16 01:01:51.129897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.523 [2024-07-16 01:01:51.133467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.523 [2024-07-16 01:01:51.143043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.523 [2024-07-16 01:01:51.143493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.523 [2024-07-16 01:01:51.143526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.523 [2024-07-16 01:01:51.143545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.523 [2024-07-16 01:01:51.143790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.523 [2024-07-16 01:01:51.144064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.523 [2024-07-16 01:01:51.144090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.523 [2024-07-16 01:01:51.144105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.523 [2024-07-16 01:01:51.147763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.523 [2024-07-16 01:01:51.157005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.523 [2024-07-16 01:01:51.157480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.523 [2024-07-16 01:01:51.157512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.523 [2024-07-16 01:01:51.157530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.523 [2024-07-16 01:01:51.157768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.523 [2024-07-16 01:01:51.158024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.523 [2024-07-16 01:01:51.158049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.523 [2024-07-16 01:01:51.158064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.523 [2024-07-16 01:01:51.161636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.523 [2024-07-16 01:01:51.170929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.523 [2024-07-16 01:01:51.171363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.523 [2024-07-16 01:01:51.171395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.523 [2024-07-16 01:01:51.171413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.523 [2024-07-16 01:01:51.171650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.523 [2024-07-16 01:01:51.171903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.523 [2024-07-16 01:01:51.171928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.523 [2024-07-16 01:01:51.171943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.523 [2024-07-16 01:01:51.175511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.523 [2024-07-16 01:01:51.184792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.523 [2024-07-16 01:01:51.185220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.523 [2024-07-16 01:01:51.185251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.523 [2024-07-16 01:01:51.185269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.523 [2024-07-16 01:01:51.185506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.523 [2024-07-16 01:01:51.185748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.523 [2024-07-16 01:01:51.185771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.523 [2024-07-16 01:01:51.185786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.523 [2024-07-16 01:01:51.189368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.523 [2024-07-16 01:01:51.198663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.523 [2024-07-16 01:01:51.199144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.523 [2024-07-16 01:01:51.199175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.523 [2024-07-16 01:01:51.199193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.523 [2024-07-16 01:01:51.199438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.523 [2024-07-16 01:01:51.199693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.523 [2024-07-16 01:01:51.199722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.523 [2024-07-16 01:01:51.199739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.523 [2024-07-16 01:01:51.203439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.523 [2024-07-16 01:01:51.212824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.523 [2024-07-16 01:01:51.213305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.523 [2024-07-16 01:01:51.213338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.523 [2024-07-16 01:01:51.213356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.523 [2024-07-16 01:01:51.213602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.523 [2024-07-16 01:01:51.213852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.523 [2024-07-16 01:01:51.213888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.523 [2024-07-16 01:01:51.213911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.523 [2024-07-16 01:01:51.217595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.523 [2024-07-16 01:01:51.226689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.523 [2024-07-16 01:01:51.227163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.523 [2024-07-16 01:01:51.227196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.523 [2024-07-16 01:01:51.227214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.523 [2024-07-16 01:01:51.227451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.523 [2024-07-16 01:01:51.227694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.523 [2024-07-16 01:01:51.227717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.523 [2024-07-16 01:01:51.227732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.523 [2024-07-16 01:01:51.231318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.523 [2024-07-16 01:01:51.240604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.523 [2024-07-16 01:01:51.241094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.523 [2024-07-16 01:01:51.241125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.523 [2024-07-16 01:01:51.241142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.524 [2024-07-16 01:01:51.241380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.524 [2024-07-16 01:01:51.241622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.524 [2024-07-16 01:01:51.241645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.524 [2024-07-16 01:01:51.241660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.524 [2024-07-16 01:01:51.245270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.524 [2024-07-16 01:01:51.254555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.524 [2024-07-16 01:01:51.254988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.524 [2024-07-16 01:01:51.255019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.524 [2024-07-16 01:01:51.255037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.524 [2024-07-16 01:01:51.255274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.524 [2024-07-16 01:01:51.255516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.524 [2024-07-16 01:01:51.255539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.524 [2024-07-16 01:01:51.255554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.524 [2024-07-16 01:01:51.259143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.524 [2024-07-16 01:01:51.268423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.524 [2024-07-16 01:01:51.268895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.524 [2024-07-16 01:01:51.268926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.524 [2024-07-16 01:01:51.268944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.524 [2024-07-16 01:01:51.269181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.524 [2024-07-16 01:01:51.269423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.524 [2024-07-16 01:01:51.269446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.524 [2024-07-16 01:01:51.269461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.524 [2024-07-16 01:01:51.273048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.785 [2024-07-16 01:01:51.282342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.785 [2024-07-16 01:01:51.282895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.785 [2024-07-16 01:01:51.282943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.785 [2024-07-16 01:01:51.282961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.785 [2024-07-16 01:01:51.283198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.785 [2024-07-16 01:01:51.283440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.786 [2024-07-16 01:01:51.283463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.786 [2024-07-16 01:01:51.283478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.786 [2024-07-16 01:01:51.287066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.786 [2024-07-16 01:01:51.296349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.786 [2024-07-16 01:01:51.296843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.786 [2024-07-16 01:01:51.296886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.786 [2024-07-16 01:01:51.296907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.786 [2024-07-16 01:01:51.297145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.786 [2024-07-16 01:01:51.297386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.786 [2024-07-16 01:01:51.297410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.786 [2024-07-16 01:01:51.297424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.786 [2024-07-16 01:01:51.301005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.786 [2024-07-16 01:01:51.310278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.786 [2024-07-16 01:01:51.310705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.786 [2024-07-16 01:01:51.310735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.786 [2024-07-16 01:01:51.310753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.786 [2024-07-16 01:01:51.311003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.786 [2024-07-16 01:01:51.311245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.786 [2024-07-16 01:01:51.311269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.786 [2024-07-16 01:01:51.311284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.786 [2024-07-16 01:01:51.314856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.786 [2024-07-16 01:01:51.324149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.786 [2024-07-16 01:01:51.324698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.786 [2024-07-16 01:01:51.324747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.786 [2024-07-16 01:01:51.324765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.786 [2024-07-16 01:01:51.325016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.786 [2024-07-16 01:01:51.325259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.786 [2024-07-16 01:01:51.325282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.786 [2024-07-16 01:01:51.325297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.786 [2024-07-16 01:01:51.328867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.786 [2024-07-16 01:01:51.338157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.786 [2024-07-16 01:01:51.338592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.786 [2024-07-16 01:01:51.338624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.786 [2024-07-16 01:01:51.338642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.786 [2024-07-16 01:01:51.338894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.786 [2024-07-16 01:01:51.339143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.786 [2024-07-16 01:01:51.339167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.786 [2024-07-16 01:01:51.339182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.786 [2024-07-16 01:01:51.342750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.786 [2024-07-16 01:01:51.352057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.786 [2024-07-16 01:01:51.352516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.786 [2024-07-16 01:01:51.352547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.786 [2024-07-16 01:01:51.352564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.786 [2024-07-16 01:01:51.352801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.786 [2024-07-16 01:01:51.353065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.786 [2024-07-16 01:01:51.353089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.786 [2024-07-16 01:01:51.353104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.786 [2024-07-16 01:01:51.356676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.786 [2024-07-16 01:01:51.365966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.786 [2024-07-16 01:01:51.366431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.786 [2024-07-16 01:01:51.366462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.786 [2024-07-16 01:01:51.366479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.786 [2024-07-16 01:01:51.366717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.786 [2024-07-16 01:01:51.366970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.786 [2024-07-16 01:01:51.366994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.786 [2024-07-16 01:01:51.367009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.786 [2024-07-16 01:01:51.370581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.786 [2024-07-16 01:01:51.379864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.786 [2024-07-16 01:01:51.380332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.786 [2024-07-16 01:01:51.380363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.786 [2024-07-16 01:01:51.380380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.786 [2024-07-16 01:01:51.380618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.786 [2024-07-16 01:01:51.380859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.786 [2024-07-16 01:01:51.380894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.786 [2024-07-16 01:01:51.380911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.786 [2024-07-16 01:01:51.384480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.786 [2024-07-16 01:01:51.393754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.786 [2024-07-16 01:01:51.394195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.786 [2024-07-16 01:01:51.394226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.786 [2024-07-16 01:01:51.394244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.786 [2024-07-16 01:01:51.394482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.786 [2024-07-16 01:01:51.394723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.786 [2024-07-16 01:01:51.394747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.786 [2024-07-16 01:01:51.394761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.786 [2024-07-16 01:01:51.398339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.786 [2024-07-16 01:01:51.407609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.786 [2024-07-16 01:01:51.408090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.786 [2024-07-16 01:01:51.408121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.786 [2024-07-16 01:01:51.408138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.786 [2024-07-16 01:01:51.408376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.786 [2024-07-16 01:01:51.408618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.786 [2024-07-16 01:01:51.408641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.786 [2024-07-16 01:01:51.408656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.786 [2024-07-16 01:01:51.412239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.786 [2024-07-16 01:01:51.421511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.787 [2024-07-16 01:01:51.421979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.787 [2024-07-16 01:01:51.422011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.787 [2024-07-16 01:01:51.422028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.787 [2024-07-16 01:01:51.422266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.787 [2024-07-16 01:01:51.422507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.787 [2024-07-16 01:01:51.422531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.787 [2024-07-16 01:01:51.422546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.787 [2024-07-16 01:01:51.426125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.787 [2024-07-16 01:01:51.435393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.787 [2024-07-16 01:01:51.435862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.787 [2024-07-16 01:01:51.435899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.787 [2024-07-16 01:01:51.435922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.787 [2024-07-16 01:01:51.436160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.787 [2024-07-16 01:01:51.436402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.787 [2024-07-16 01:01:51.436425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.787 [2024-07-16 01:01:51.436440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.787 [2024-07-16 01:01:51.440020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.787 [2024-07-16 01:01:51.449295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.787 [2024-07-16 01:01:51.449735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.787 [2024-07-16 01:01:51.449766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.787 [2024-07-16 01:01:51.449783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.787 [2024-07-16 01:01:51.450033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.787 [2024-07-16 01:01:51.450275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.787 [2024-07-16 01:01:51.450299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.787 [2024-07-16 01:01:51.450313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.787 [2024-07-16 01:01:51.453893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.787 [2024-07-16 01:01:51.463162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.787 [2024-07-16 01:01:51.463622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.787 [2024-07-16 01:01:51.463652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.787 [2024-07-16 01:01:51.463669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.787 [2024-07-16 01:01:51.463919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.787 [2024-07-16 01:01:51.464161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.787 [2024-07-16 01:01:51.464185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.787 [2024-07-16 01:01:51.464199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.787 [2024-07-16 01:01:51.467765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.787 [2024-07-16 01:01:51.477041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.787 [2024-07-16 01:01:51.477490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.787 [2024-07-16 01:01:51.477521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.787 [2024-07-16 01:01:51.477538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.787 [2024-07-16 01:01:51.477776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.787 [2024-07-16 01:01:51.478028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.787 [2024-07-16 01:01:51.478058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.787 [2024-07-16 01:01:51.478074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.787 [2024-07-16 01:01:51.481642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.787 [2024-07-16 01:01:51.490921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.787 [2024-07-16 01:01:51.491387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.787 [2024-07-16 01:01:51.491417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.787 [2024-07-16 01:01:51.491435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.787 [2024-07-16 01:01:51.491672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.787 [2024-07-16 01:01:51.491925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.787 [2024-07-16 01:01:51.491949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.787 [2024-07-16 01:01:51.491964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.787 [2024-07-16 01:01:51.495530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.787 [2024-07-16 01:01:51.504802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.787 [2024-07-16 01:01:51.505282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.787 [2024-07-16 01:01:51.505313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.787 [2024-07-16 01:01:51.505330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.787 [2024-07-16 01:01:51.505568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.787 [2024-07-16 01:01:51.505809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.787 [2024-07-16 01:01:51.505832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.787 [2024-07-16 01:01:51.505846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.787 [2024-07-16 01:01:51.509425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.787 [2024-07-16 01:01:51.518699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.787 [2024-07-16 01:01:51.519172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.787 [2024-07-16 01:01:51.519202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.787 [2024-07-16 01:01:51.519219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.787 [2024-07-16 01:01:51.519457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.787 [2024-07-16 01:01:51.519699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.787 [2024-07-16 01:01:51.519722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.787 [2024-07-16 01:01:51.519737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.787 [2024-07-16 01:01:51.523314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.787 [2024-07-16 01:01:51.532586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.787 [2024-07-16 01:01:51.533050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.787 [2024-07-16 01:01:51.533082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:16.787 [2024-07-16 01:01:51.533099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:16.787 [2024-07-16 01:01:51.533337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:16.787 [2024-07-16 01:01:51.533579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.787 [2024-07-16 01:01:51.533602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.787 [2024-07-16 01:01:51.533616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.787 [2024-07-16 01:01:51.537194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.049 [2024-07-16 01:01:51.546484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.049 [2024-07-16 01:01:51.546915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.049 [2024-07-16 01:01:51.546946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.049 [2024-07-16 01:01:51.546964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.049 [2024-07-16 01:01:51.547201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.049 [2024-07-16 01:01:51.547443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.049 [2024-07-16 01:01:51.547466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.049 [2024-07-16 01:01:51.547481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.049 [2024-07-16 01:01:51.551062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.049 [2024-07-16 01:01:51.560338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.049 [2024-07-16 01:01:51.560769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.049 [2024-07-16 01:01:51.560800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.049 [2024-07-16 01:01:51.560817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.049 [2024-07-16 01:01:51.561065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.049 [2024-07-16 01:01:51.561308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.049 [2024-07-16 01:01:51.561331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.049 [2024-07-16 01:01:51.561345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.049 [2024-07-16 01:01:51.564923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.049 [2024-07-16 01:01:51.574209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.049 [2024-07-16 01:01:51.574681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.049 [2024-07-16 01:01:51.574712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.049 [2024-07-16 01:01:51.574729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.049 [2024-07-16 01:01:51.574982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.049 [2024-07-16 01:01:51.575224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.049 [2024-07-16 01:01:51.575247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.049 [2024-07-16 01:01:51.575262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.050 [2024-07-16 01:01:51.578830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.050 [2024-07-16 01:01:51.588121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.050 [2024-07-16 01:01:51.588532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.050 [2024-07-16 01:01:51.588563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.050 [2024-07-16 01:01:51.588581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.050 [2024-07-16 01:01:51.588818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.050 [2024-07-16 01:01:51.589069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.050 [2024-07-16 01:01:51.589093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.050 [2024-07-16 01:01:51.589108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.050 [2024-07-16 01:01:51.592676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.050 [2024-07-16 01:01:51.601796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.050 [2024-07-16 01:01:51.602206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.050 [2024-07-16 01:01:51.602234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.050 [2024-07-16 01:01:51.602250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.050 [2024-07-16 01:01:51.602488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.050 [2024-07-16 01:01:51.602687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.050 [2024-07-16 01:01:51.602706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.050 [2024-07-16 01:01:51.602717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.050 [2024-07-16 01:01:51.605833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.050 [2024-07-16 01:01:51.615303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.050 [2024-07-16 01:01:51.615766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.050 [2024-07-16 01:01:51.615807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.050 [2024-07-16 01:01:51.615824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.050 [2024-07-16 01:01:51.616047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.050 [2024-07-16 01:01:51.616288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.050 [2024-07-16 01:01:51.616308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.050 [2024-07-16 01:01:51.616325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.050 [2024-07-16 01:01:51.619405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.050 [2024-07-16 01:01:51.628627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.050 [2024-07-16 01:01:51.629048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.050 [2024-07-16 01:01:51.629076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.050 [2024-07-16 01:01:51.629092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.050 [2024-07-16 01:01:51.629333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.050 [2024-07-16 01:01:51.629547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.050 [2024-07-16 01:01:51.629565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.050 [2024-07-16 01:01:51.629577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.050 [2024-07-16 01:01:51.632564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.050 [2024-07-16 01:01:51.641892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.050 [2024-07-16 01:01:51.642346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.050 [2024-07-16 01:01:51.642389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.050 [2024-07-16 01:01:51.642405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.050 [2024-07-16 01:01:51.642641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.050 [2024-07-16 01:01:51.642839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.050 [2024-07-16 01:01:51.642872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.050 [2024-07-16 01:01:51.642894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.050 [2024-07-16 01:01:51.645872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.050 [2024-07-16 01:01:51.655178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.050 [2024-07-16 01:01:51.655950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.050 [2024-07-16 01:01:51.656003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.050 [2024-07-16 01:01:51.656021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.050 [2024-07-16 01:01:51.656258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.050 [2024-07-16 01:01:51.656458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.050 [2024-07-16 01:01:51.656477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.050 [2024-07-16 01:01:51.656489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.050 [2024-07-16 01:01:51.659476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.050 [2024-07-16 01:01:51.668418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.050 [2024-07-16 01:01:51.668906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.050 [2024-07-16 01:01:51.668956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.050 [2024-07-16 01:01:51.668973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.050 [2024-07-16 01:01:51.669213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.050 [2024-07-16 01:01:51.669411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.050 [2024-07-16 01:01:51.669430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.050 [2024-07-16 01:01:51.669442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.050 [2024-07-16 01:01:51.672425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.050 [2024-07-16 01:01:51.681732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.050 [2024-07-16 01:01:51.682218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.050 [2024-07-16 01:01:51.682262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.050 [2024-07-16 01:01:51.682278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.050 [2024-07-16 01:01:51.682528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.050 [2024-07-16 01:01:51.682726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.050 [2024-07-16 01:01:51.682745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.050 [2024-07-16 01:01:51.682757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.050 [2024-07-16 01:01:51.685751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.050 [2024-07-16 01:01:51.695025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.050 [2024-07-16 01:01:51.695463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.050 [2024-07-16 01:01:51.695490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.050 [2024-07-16 01:01:51.695520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.050 [2024-07-16 01:01:51.695774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.050 [2024-07-16 01:01:51.696000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.050 [2024-07-16 01:01:51.696021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.050 [2024-07-16 01:01:51.696033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.050 [2024-07-16 01:01:51.699044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.050 [2024-07-16 01:01:51.708329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.050 [2024-07-16 01:01:51.708767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.050 [2024-07-16 01:01:51.708795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.050 [2024-07-16 01:01:51.708811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.051 [2024-07-16 01:01:51.709060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.051 [2024-07-16 01:01:51.709282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.051 [2024-07-16 01:01:51.709302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.051 [2024-07-16 01:01:51.709314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.051 [2024-07-16 01:01:51.712298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.051 [2024-07-16 01:01:51.721561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.051 [2024-07-16 01:01:51.722011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.051 [2024-07-16 01:01:51.722053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.051 [2024-07-16 01:01:51.722069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.051 [2024-07-16 01:01:51.722319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.051 [2024-07-16 01:01:51.722517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.051 [2024-07-16 01:01:51.722536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.051 [2024-07-16 01:01:51.722548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.051 [2024-07-16 01:01:51.725567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.051 [2024-07-16 01:01:51.734830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.051 [2024-07-16 01:01:51.735307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.051 [2024-07-16 01:01:51.735334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.051 [2024-07-16 01:01:51.735349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.051 [2024-07-16 01:01:51.735567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.051 [2024-07-16 01:01:51.735781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.051 [2024-07-16 01:01:51.735800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.051 [2024-07-16 01:01:51.735812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.051 [2024-07-16 01:01:51.738778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.051 [2024-07-16 01:01:51.748056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.051 [2024-07-16 01:01:51.748572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.051 [2024-07-16 01:01:51.748598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.051 [2024-07-16 01:01:51.748628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.051 [2024-07-16 01:01:51.748859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.051 [2024-07-16 01:01:51.749096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.051 [2024-07-16 01:01:51.749116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.051 [2024-07-16 01:01:51.749129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.051 [2024-07-16 01:01:51.752114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.051 [2024-07-16 01:01:51.761384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.051 [2024-07-16 01:01:51.761884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.051 [2024-07-16 01:01:51.761912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.051 [2024-07-16 01:01:51.761928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.051 [2024-07-16 01:01:51.762142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.051 [2024-07-16 01:01:51.762374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.051 [2024-07-16 01:01:51.762393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.051 [2024-07-16 01:01:51.762406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.051 [2024-07-16 01:01:51.765389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.051 [2024-07-16 01:01:51.774681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.051 [2024-07-16 01:01:51.775098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.051 [2024-07-16 01:01:51.775126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.051 [2024-07-16 01:01:51.775142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.051 [2024-07-16 01:01:51.775380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.051 [2024-07-16 01:01:51.775594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.051 [2024-07-16 01:01:51.775613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.051 [2024-07-16 01:01:51.775625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.051 [2024-07-16 01:01:51.778604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.051 [2024-07-16 01:01:51.788044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.051 [2024-07-16 01:01:51.788534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.051 [2024-07-16 01:01:51.788561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.051 [2024-07-16 01:01:51.788593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.051 [2024-07-16 01:01:51.788845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.051 [2024-07-16 01:01:51.789073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.051 [2024-07-16 01:01:51.789093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.051 [2024-07-16 01:01:51.789106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.051 [2024-07-16 01:01:51.792086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.051 [2024-07-16 01:01:51.801444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.051 [2024-07-16 01:01:51.801881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.051 [2024-07-16 01:01:51.801909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.051 [2024-07-16 01:01:51.801951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.051 [2024-07-16 01:01:51.802171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.051 [2024-07-16 01:01:51.802387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.051 [2024-07-16 01:01:51.802407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.051 [2024-07-16 01:01:51.802419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.051 [2024-07-16 01:01:51.805652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.315 [2024-07-16 01:01:51.814807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.315 [2024-07-16 01:01:51.815230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.315 [2024-07-16 01:01:51.815258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.315 [2024-07-16 01:01:51.815274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.315 [2024-07-16 01:01:51.815503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.315 [2024-07-16 01:01:51.815727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.315 [2024-07-16 01:01:51.815748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.315 [2024-07-16 01:01:51.815760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.315 [2024-07-16 01:01:51.819129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.315 [2024-07-16 01:01:51.828089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.315 [2024-07-16 01:01:51.828590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.315 [2024-07-16 01:01:51.828632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.315 [2024-07-16 01:01:51.828648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.315 [2024-07-16 01:01:51.828915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.315 [2024-07-16 01:01:51.829127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.315 [2024-07-16 01:01:51.829147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.315 [2024-07-16 01:01:51.829175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.315 [2024-07-16 01:01:51.832194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.315 [2024-07-16 01:01:51.841335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.315 [2024-07-16 01:01:51.841779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.315 [2024-07-16 01:01:51.841820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.315 [2024-07-16 01:01:51.841836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.315 [2024-07-16 01:01:51.842083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.315 [2024-07-16 01:01:51.842299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.315 [2024-07-16 01:01:51.842323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.315 [2024-07-16 01:01:51.842336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.315 [2024-07-16 01:01:51.845331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.315 [2024-07-16 01:01:51.854602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.315 [2024-07-16 01:01:51.855087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.315 [2024-07-16 01:01:51.855116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.315 [2024-07-16 01:01:51.855132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.315 [2024-07-16 01:01:51.855387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.315 [2024-07-16 01:01:51.855585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.315 [2024-07-16 01:01:51.855604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.315 [2024-07-16 01:01:51.855616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.315 [2024-07-16 01:01:51.858648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.315 [2024-07-16 01:01:51.867782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.315 [2024-07-16 01:01:51.868306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.315 [2024-07-16 01:01:51.868348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.315 [2024-07-16 01:01:51.868365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.315 [2024-07-16 01:01:51.868617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.315 [2024-07-16 01:01:51.868816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.315 [2024-07-16 01:01:51.868835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.315 [2024-07-16 01:01:51.868847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.315 [2024-07-16 01:01:51.871887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.315 [2024-07-16 01:01:51.881025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.315 [2024-07-16 01:01:51.881520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.315 [2024-07-16 01:01:51.881562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.315 [2024-07-16 01:01:51.881578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.315 [2024-07-16 01:01:51.881816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.315 [2024-07-16 01:01:51.882023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.315 [2024-07-16 01:01:51.882043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.315 [2024-07-16 01:01:51.882055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.315 [2024-07-16 01:01:51.885012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.315 [2024-07-16 01:01:51.894290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.315 [2024-07-16 01:01:51.894773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.315 [2024-07-16 01:01:51.894800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.315 [2024-07-16 01:01:51.894831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.315 [2024-07-16 01:01:51.895081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.315 [2024-07-16 01:01:51.895299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.315 [2024-07-16 01:01:51.895318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.315 [2024-07-16 01:01:51.895330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.315 [2024-07-16 01:01:51.898308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.315 [2024-07-16 01:01:51.907559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.315 [2024-07-16 01:01:51.908036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.315 [2024-07-16 01:01:51.908064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.315 [2024-07-16 01:01:51.908080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.315 [2024-07-16 01:01:51.908307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.315 [2024-07-16 01:01:51.908520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.315 [2024-07-16 01:01:51.908540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.315 [2024-07-16 01:01:51.908552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.315 [2024-07-16 01:01:51.911559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.315 [2024-07-16 01:01:51.920814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.315 [2024-07-16 01:01:51.921230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.315 [2024-07-16 01:01:51.921258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.315 [2024-07-16 01:01:51.921274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.315 [2024-07-16 01:01:51.921515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.315 [2024-07-16 01:01:51.921729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.315 [2024-07-16 01:01:51.921748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.315 [2024-07-16 01:01:51.921760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.315 [2024-07-16 01:01:51.924779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.315 [2024-07-16 01:01:51.934055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.315 [2024-07-16 01:01:51.934536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.316 [2024-07-16 01:01:51.934577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.316 [2024-07-16 01:01:51.934598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.316 [2024-07-16 01:01:51.934832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.316 [2024-07-16 01:01:51.935059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.316 [2024-07-16 01:01:51.935080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.316 [2024-07-16 01:01:51.935092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.316 [2024-07-16 01:01:51.938069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.316 [2024-07-16 01:01:51.947335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.316 [2024-07-16 01:01:51.947787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.316 [2024-07-16 01:01:51.947814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.316 [2024-07-16 01:01:51.947844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.316 [2024-07-16 01:01:51.948095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.316 [2024-07-16 01:01:51.948311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.316 [2024-07-16 01:01:51.948331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.316 [2024-07-16 01:01:51.948343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.316 [2024-07-16 01:01:51.951318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.316 [2024-07-16 01:01:51.960611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.316 [2024-07-16 01:01:51.961086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.316 [2024-07-16 01:01:51.961114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.316 [2024-07-16 01:01:51.961130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.316 [2024-07-16 01:01:51.961384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.316 [2024-07-16 01:01:51.961583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.316 [2024-07-16 01:01:51.961602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.316 [2024-07-16 01:01:51.961614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.316 [2024-07-16 01:01:51.964589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.316 [2024-07-16 01:01:51.973844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.316 [2024-07-16 01:01:51.974281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.316 [2024-07-16 01:01:51.974308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.316 [2024-07-16 01:01:51.974323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.316 [2024-07-16 01:01:51.974579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.316 [2024-07-16 01:01:51.974796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.316 [2024-07-16 01:01:51.974820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.316 [2024-07-16 01:01:51.974833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.316 [2024-07-16 01:01:51.977848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.316 [2024-07-16 01:01:51.987129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.316 [2024-07-16 01:01:51.987648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.316 [2024-07-16 01:01:51.987676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.316 [2024-07-16 01:01:51.987691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.316 [2024-07-16 01:01:51.987940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.316 [2024-07-16 01:01:51.988145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.316 [2024-07-16 01:01:51.988164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.316 [2024-07-16 01:01:51.988177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.316 [2024-07-16 01:01:51.991154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.316 [2024-07-16 01:01:52.000393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.316 [2024-07-16 01:01:52.000792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.316 [2024-07-16 01:01:52.000818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.316 [2024-07-16 01:01:52.000833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.316 [2024-07-16 01:01:52.001110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.316 [2024-07-16 01:01:52.001327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.316 [2024-07-16 01:01:52.001346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.316 [2024-07-16 01:01:52.001358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.316 [2024-07-16 01:01:52.004336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.316 [2024-07-16 01:01:52.013589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.316 [2024-07-16 01:01:52.014054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.316 [2024-07-16 01:01:52.014082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.316 [2024-07-16 01:01:52.014097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.316 [2024-07-16 01:01:52.014339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.316 [2024-07-16 01:01:52.014543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.316 [2024-07-16 01:01:52.014562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.316 [2024-07-16 01:01:52.014575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.316 [2024-07-16 01:01:52.017589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.316 [2024-07-16 01:01:52.026853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.316 [2024-07-16 01:01:52.027310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.316 [2024-07-16 01:01:52.027353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.316 [2024-07-16 01:01:52.027368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.316 [2024-07-16 01:01:52.027622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.316 [2024-07-16 01:01:52.027820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.316 [2024-07-16 01:01:52.027839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.316 [2024-07-16 01:01:52.027851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.316 [2024-07-16 01:01:52.030838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.316 [2024-07-16 01:01:52.040110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.316 [2024-07-16 01:01:52.040609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.316 [2024-07-16 01:01:52.040636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.316 [2024-07-16 01:01:52.040667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.316 [2024-07-16 01:01:52.040917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.316 [2024-07-16 01:01:52.041121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.316 [2024-07-16 01:01:52.041141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.316 [2024-07-16 01:01:52.041153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.316 [2024-07-16 01:01:52.044130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.316 [2024-07-16 01:01:52.053363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.316 [2024-07-16 01:01:52.053926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.316 [2024-07-16 01:01:52.053968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.316 [2024-07-16 01:01:52.053985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.316 [2024-07-16 01:01:52.054219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.316 [2024-07-16 01:01:52.054417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.316 [2024-07-16 01:01:52.054436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.316 [2024-07-16 01:01:52.054448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.316 [2024-07-16 01:01:52.057432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.316 [2024-07-16 01:01:52.066796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.316 [2024-07-16 01:01:52.067204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.316 [2024-07-16 01:01:52.067232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.316 [2024-07-16 01:01:52.067262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.316 [2024-07-16 01:01:52.067506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.316 [2024-07-16 01:01:52.067735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.316 [2024-07-16 01:01:52.067757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.316 [2024-07-16 01:01:52.067770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.577 [2024-07-16 01:01:52.071164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.577 [2024-07-16 01:01:52.080248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.577 [2024-07-16 01:01:52.080633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.577 [2024-07-16 01:01:52.080673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.577 [2024-07-16 01:01:52.080687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.577 [2024-07-16 01:01:52.081129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.577 [2024-07-16 01:01:52.081349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.577 [2024-07-16 01:01:52.081385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.577 [2024-07-16 01:01:52.081397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.577 [2024-07-16 01:01:52.084451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.577 [2024-07-16 01:01:52.093490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.577 [2024-07-16 01:01:52.093939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.577 [2024-07-16 01:01:52.093982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.577 [2024-07-16 01:01:52.093998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.577 [2024-07-16 01:01:52.094251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.577 [2024-07-16 01:01:52.094449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.577 [2024-07-16 01:01:52.094468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.577 [2024-07-16 01:01:52.094480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.577 [2024-07-16 01:01:52.097460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.577 [2024-07-16 01:01:52.106916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.577 [2024-07-16 01:01:52.107328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.577 [2024-07-16 01:01:52.107355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.577 [2024-07-16 01:01:52.107370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.577 [2024-07-16 01:01:52.107638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.577 [2024-07-16 01:01:52.107836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.577 [2024-07-16 01:01:52.107855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.577 [2024-07-16 01:01:52.107873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.577 [2024-07-16 01:01:52.110871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.577 [2024-07-16 01:01:52.120135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.577 [2024-07-16 01:01:52.120526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.577 [2024-07-16 01:01:52.120552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.577 [2024-07-16 01:01:52.120567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.577 [2024-07-16 01:01:52.120815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.577 [2024-07-16 01:01:52.121044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.577 [2024-07-16 01:01:52.121065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.577 [2024-07-16 01:01:52.121077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.577 [2024-07-16 01:01:52.124053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.577 [2024-07-16 01:01:52.133452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.577 [2024-07-16 01:01:52.133854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.577 [2024-07-16 01:01:52.133902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.577 [2024-07-16 01:01:52.133920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.577 [2024-07-16 01:01:52.134163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.577 [2024-07-16 01:01:52.134376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.578 [2024-07-16 01:01:52.134396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.578 [2024-07-16 01:01:52.134408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.578 [2024-07-16 01:01:52.137388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.578 [2024-07-16 01:01:52.146789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.578 [2024-07-16 01:01:52.147254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.578 [2024-07-16 01:01:52.147282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.578 [2024-07-16 01:01:52.147298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.578 [2024-07-16 01:01:52.147550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.578 [2024-07-16 01:01:52.147748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.578 [2024-07-16 01:01:52.147767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.578 [2024-07-16 01:01:52.147780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.578 [2024-07-16 01:01:52.150745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.578 [2024-07-16 01:01:52.160027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.578 [2024-07-16 01:01:52.160514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.578 [2024-07-16 01:01:52.160560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.578 [2024-07-16 01:01:52.160577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.578 [2024-07-16 01:01:52.160831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.578 [2024-07-16 01:01:52.161078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.578 [2024-07-16 01:01:52.161099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.578 [2024-07-16 01:01:52.161112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.578 [2024-07-16 01:01:52.164113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.578 [2024-07-16 01:01:52.173379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.578 [2024-07-16 01:01:52.173765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.578 [2024-07-16 01:01:52.173792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.578 [2024-07-16 01:01:52.173807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.578 [2024-07-16 01:01:52.174072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.578 [2024-07-16 01:01:52.174288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.578 [2024-07-16 01:01:52.174308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.578 [2024-07-16 01:01:52.174320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.578 [2024-07-16 01:01:52.177350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.578 [2024-07-16 01:01:52.186554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.578 [2024-07-16 01:01:52.186959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.578 [2024-07-16 01:01:52.186988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.578 [2024-07-16 01:01:52.187003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.578 [2024-07-16 01:01:52.187242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.578 [2024-07-16 01:01:52.187440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.578 [2024-07-16 01:01:52.187459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.578 [2024-07-16 01:01:52.187471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.578 [2024-07-16 01:01:52.190454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.578 [2024-07-16 01:01:52.199890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.578 [2024-07-16 01:01:52.200394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.578 [2024-07-16 01:01:52.200420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.578 [2024-07-16 01:01:52.200435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.578 [2024-07-16 01:01:52.200655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.578 [2024-07-16 01:01:52.200899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.578 [2024-07-16 01:01:52.200920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.578 [2024-07-16 01:01:52.200933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.578 [2024-07-16 01:01:52.203907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.578 [2024-07-16 01:01:52.213201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.578 [2024-07-16 01:01:52.213705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.578 [2024-07-16 01:01:52.213733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.578 [2024-07-16 01:01:52.213748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.578 [2024-07-16 01:01:52.213999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.578 [2024-07-16 01:01:52.214217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.578 [2024-07-16 01:01:52.214236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.578 [2024-07-16 01:01:52.214248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.578 [2024-07-16 01:01:52.217228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.578 [2024-07-16 01:01:52.226521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.578 [2024-07-16 01:01:52.226910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.578 [2024-07-16 01:01:52.226951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.578 [2024-07-16 01:01:52.226968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.578 [2024-07-16 01:01:52.227196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.578 [2024-07-16 01:01:52.227410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.578 [2024-07-16 01:01:52.227430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.578 [2024-07-16 01:01:52.227441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.578 [2024-07-16 01:01:52.230422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.578 [2024-07-16 01:01:52.239912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.578 [2024-07-16 01:01:52.240440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.578 [2024-07-16 01:01:52.240482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.578 [2024-07-16 01:01:52.240498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.578 [2024-07-16 01:01:52.240750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.578 [2024-07-16 01:01:52.240979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.578 [2024-07-16 01:01:52.241001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.578 [2024-07-16 01:01:52.241014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.578 [2024-07-16 01:01:52.244038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.578 [2024-07-16 01:01:52.253305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.578 [2024-07-16 01:01:52.253706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.578 [2024-07-16 01:01:52.253732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.578 [2024-07-16 01:01:52.253746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.578 [2024-07-16 01:01:52.254013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.578 [2024-07-16 01:01:52.254257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.578 [2024-07-16 01:01:52.254277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.578 [2024-07-16 01:01:52.254290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.578 [2024-07-16 01:01:52.257312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.578 [2024-07-16 01:01:52.266500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.578 [2024-07-16 01:01:52.266951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.578 [2024-07-16 01:01:52.266993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.578 [2024-07-16 01:01:52.267010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.579 [2024-07-16 01:01:52.267260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.579 [2024-07-16 01:01:52.267457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.579 [2024-07-16 01:01:52.267476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.579 [2024-07-16 01:01:52.267488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.579 [2024-07-16 01:01:52.270469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.579 [2024-07-16 01:01:52.279836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.579 [2024-07-16 01:01:52.280374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.579 [2024-07-16 01:01:52.280417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.579 [2024-07-16 01:01:52.280433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.579 [2024-07-16 01:01:52.280689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.579 [2024-07-16 01:01:52.280923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.579 [2024-07-16 01:01:52.280945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.579 [2024-07-16 01:01:52.280957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.579 [2024-07-16 01:01:52.284025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.579 [2024-07-16 01:01:52.293192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.579 [2024-07-16 01:01:52.293650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.579 [2024-07-16 01:01:52.293696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.579 [2024-07-16 01:01:52.293721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.579 [2024-07-16 01:01:52.294008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.579 [2024-07-16 01:01:52.294253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.579 [2024-07-16 01:01:52.294276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.579 [2024-07-16 01:01:52.294291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.579 [2024-07-16 01:01:52.297870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.579 [2024-07-16 01:01:52.307156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.579 [2024-07-16 01:01:52.307593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.579 [2024-07-16 01:01:52.307625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.579 [2024-07-16 01:01:52.307643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.579 [2024-07-16 01:01:52.307890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.579 [2024-07-16 01:01:52.308134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.579 [2024-07-16 01:01:52.308158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.579 [2024-07-16 01:01:52.308172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.579 [2024-07-16 01:01:52.311743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.579 [2024-07-16 01:01:52.321035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.579 [2024-07-16 01:01:52.321480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.579 [2024-07-16 01:01:52.321510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.579 [2024-07-16 01:01:52.321528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.579 [2024-07-16 01:01:52.321766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.579 [2024-07-16 01:01:52.322018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.579 [2024-07-16 01:01:52.322042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.579 [2024-07-16 01:01:52.322058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.579 [2024-07-16 01:01:52.325624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.841 [2024-07-16 01:01:52.334910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.841 [2024-07-16 01:01:52.335541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.841 [2024-07-16 01:01:52.335604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.841 [2024-07-16 01:01:52.335621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.841 [2024-07-16 01:01:52.335859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.841 [2024-07-16 01:01:52.336110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.841 [2024-07-16 01:01:52.336140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.841 [2024-07-16 01:01:52.336156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.841 [2024-07-16 01:01:52.339729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.841 [2024-07-16 01:01:52.348865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.841 [2024-07-16 01:01:52.349357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.841 [2024-07-16 01:01:52.349388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.841 [2024-07-16 01:01:52.349406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.841 [2024-07-16 01:01:52.349651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.841 [2024-07-16 01:01:52.349913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.841 [2024-07-16 01:01:52.349938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.841 [2024-07-16 01:01:52.349953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.841 [2024-07-16 01:01:52.353628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.841 [2024-07-16 01:01:52.363025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.841 [2024-07-16 01:01:52.363489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.841 [2024-07-16 01:01:52.363528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.841 [2024-07-16 01:01:52.363549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.841 [2024-07-16 01:01:52.363795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.841 [2024-07-16 01:01:52.364060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.841 [2024-07-16 01:01:52.364087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.841 [2024-07-16 01:01:52.364107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.841 [2024-07-16 01:01:52.367784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.841 [2024-07-16 01:01:52.376985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.841 [2024-07-16 01:01:52.377413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.841 [2024-07-16 01:01:52.377444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.841 [2024-07-16 01:01:52.377462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.841 [2024-07-16 01:01:52.377699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.841 [2024-07-16 01:01:52.377954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.841 [2024-07-16 01:01:52.377978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.841 [2024-07-16 01:01:52.377993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.841 [2024-07-16 01:01:52.381570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.841 [2024-07-16 01:01:52.390870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.841 [2024-07-16 01:01:52.391342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.841 [2024-07-16 01:01:52.391373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.841 [2024-07-16 01:01:52.391391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.841 [2024-07-16 01:01:52.391628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.841 [2024-07-16 01:01:52.391870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.841 [2024-07-16 01:01:52.391905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.841 [2024-07-16 01:01:52.391920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.841 [2024-07-16 01:01:52.395495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.841 [2024-07-16 01:01:52.404776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.842 [2024-07-16 01:01:52.405251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.842 [2024-07-16 01:01:52.405282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.842 [2024-07-16 01:01:52.405300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.842 [2024-07-16 01:01:52.405537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.842 [2024-07-16 01:01:52.405778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.842 [2024-07-16 01:01:52.405802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.842 [2024-07-16 01:01:52.405816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.842 [2024-07-16 01:01:52.409401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.842 [2024-07-16 01:01:52.418683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.842 [2024-07-16 01:01:52.419151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.842 [2024-07-16 01:01:52.419182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.842 [2024-07-16 01:01:52.419199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.842 [2024-07-16 01:01:52.419436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.842 [2024-07-16 01:01:52.419679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.842 [2024-07-16 01:01:52.419702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.842 [2024-07-16 01:01:52.419717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.842 [2024-07-16 01:01:52.423299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.842 [2024-07-16 01:01:52.432577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.842 [2024-07-16 01:01:52.433035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.842 [2024-07-16 01:01:52.433065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.842 [2024-07-16 01:01:52.433083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.842 [2024-07-16 01:01:52.433330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.842 [2024-07-16 01:01:52.433572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.842 [2024-07-16 01:01:52.433595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.842 [2024-07-16 01:01:52.433610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.842 [2024-07-16 01:01:52.437194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.842 [2024-07-16 01:01:52.446488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.842 [2024-07-16 01:01:52.446966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.842 [2024-07-16 01:01:52.446998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.842 [2024-07-16 01:01:52.447015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.842 [2024-07-16 01:01:52.447252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.842 [2024-07-16 01:01:52.447494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.842 [2024-07-16 01:01:52.447517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.842 [2024-07-16 01:01:52.447532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.842 [2024-07-16 01:01:52.451114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.842 [2024-07-16 01:01:52.460389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.842 [2024-07-16 01:01:52.460849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.842 [2024-07-16 01:01:52.460886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.842 [2024-07-16 01:01:52.460906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.842 [2024-07-16 01:01:52.461144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.842 [2024-07-16 01:01:52.461385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.842 [2024-07-16 01:01:52.461408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.842 [2024-07-16 01:01:52.461423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.842 [2024-07-16 01:01:52.465002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.842 [2024-07-16 01:01:52.474274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.842 [2024-07-16 01:01:52.474729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.842 [2024-07-16 01:01:52.474760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.842 [2024-07-16 01:01:52.474777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.842 [2024-07-16 01:01:52.475028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.842 [2024-07-16 01:01:52.475271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.842 [2024-07-16 01:01:52.475294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.842 [2024-07-16 01:01:52.475315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.842 [2024-07-16 01:01:52.478895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.842 [2024-07-16 01:01:52.488190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.842 [2024-07-16 01:01:52.488659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.842 [2024-07-16 01:01:52.488690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.842 [2024-07-16 01:01:52.488708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.842 [2024-07-16 01:01:52.488958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.842 [2024-07-16 01:01:52.489201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.842 [2024-07-16 01:01:52.489224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.842 [2024-07-16 01:01:52.489239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.842 [2024-07-16 01:01:52.492811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.842 [2024-07-16 01:01:52.502107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.842 [2024-07-16 01:01:52.502578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.842 [2024-07-16 01:01:52.502609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.842 [2024-07-16 01:01:52.502626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.842 [2024-07-16 01:01:52.502864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.842 [2024-07-16 01:01:52.503116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.842 [2024-07-16 01:01:52.503140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.842 [2024-07-16 01:01:52.503155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.842 [2024-07-16 01:01:52.506727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.842 [2024-07-16 01:01:52.516016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.842 [2024-07-16 01:01:52.516449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.842 [2024-07-16 01:01:52.516480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.842 [2024-07-16 01:01:52.516497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.842 [2024-07-16 01:01:52.516735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.842 [2024-07-16 01:01:52.516989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.842 [2024-07-16 01:01:52.517014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.842 [2024-07-16 01:01:52.517029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.842 [2024-07-16 01:01:52.520601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.842 [2024-07-16 01:01:52.529898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.842 [2024-07-16 01:01:52.530362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.842 [2024-07-16 01:01:52.530393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.842 [2024-07-16 01:01:52.530410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.842 [2024-07-16 01:01:52.530648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.842 [2024-07-16 01:01:52.530903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.842 [2024-07-16 01:01:52.530927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.842 [2024-07-16 01:01:52.530942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.842 [2024-07-16 01:01:52.534510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.842 [2024-07-16 01:01:52.543795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.842 [2024-07-16 01:01:52.544266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.842 [2024-07-16 01:01:52.544297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.842 [2024-07-16 01:01:52.544314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.842 [2024-07-16 01:01:52.544552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.842 [2024-07-16 01:01:52.544794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.842 [2024-07-16 01:01:52.544817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.842 [2024-07-16 01:01:52.544832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.842 [2024-07-16 01:01:52.548414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.843 [2024-07-16 01:01:52.557713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.843 [2024-07-16 01:01:52.558164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.843 [2024-07-16 01:01:52.558195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.843 [2024-07-16 01:01:52.558212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.843 [2024-07-16 01:01:52.558450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.843 [2024-07-16 01:01:52.558691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.843 [2024-07-16 01:01:52.558715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.843 [2024-07-16 01:01:52.558730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.843 [2024-07-16 01:01:52.562315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.843 [2024-07-16 01:01:52.571596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.843 [2024-07-16 01:01:52.572066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.843 [2024-07-16 01:01:52.572096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.843 [2024-07-16 01:01:52.572114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.843 [2024-07-16 01:01:52.572357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.843 [2024-07-16 01:01:52.572600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.843 [2024-07-16 01:01:52.572623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.843 [2024-07-16 01:01:52.572637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.843 [2024-07-16 01:01:52.576217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.843 [2024-07-16 01:01:52.585495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.843 [2024-07-16 01:01:52.585952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.843 [2024-07-16 01:01:52.585983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:17.843 [2024-07-16 01:01:52.586000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:17.843 [2024-07-16 01:01:52.586238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:17.843 [2024-07-16 01:01:52.586479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.843 [2024-07-16 01:01:52.586503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.843 [2024-07-16 01:01:52.586517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.843 [2024-07-16 01:01:52.590096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.104 [2024-07-16 01:01:52.599376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.104 [2024-07-16 01:01:52.599940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.104 [2024-07-16 01:01:52.599971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.104 [2024-07-16 01:01:52.599989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.104 [2024-07-16 01:01:52.600227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.104 [2024-07-16 01:01:52.600476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.104 [2024-07-16 01:01:52.600499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.104 [2024-07-16 01:01:52.600514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.104 [2024-07-16 01:01:52.604101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.104 [2024-07-16 01:01:52.613379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.104 [2024-07-16 01:01:52.613817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.104 [2024-07-16 01:01:52.613849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.104 [2024-07-16 01:01:52.613866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.104 [2024-07-16 01:01:52.614114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.104 [2024-07-16 01:01:52.614357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.104 [2024-07-16 01:01:52.614380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.104 [2024-07-16 01:01:52.614401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.104 [2024-07-16 01:01:52.617980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.104 [2024-07-16 01:01:52.627264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.104 [2024-07-16 01:01:52.627719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.104 [2024-07-16 01:01:52.627749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.104 [2024-07-16 01:01:52.627767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.104 [2024-07-16 01:01:52.628017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.105 [2024-07-16 01:01:52.628259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.105 [2024-07-16 01:01:52.628282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.105 [2024-07-16 01:01:52.628297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.105 [2024-07-16 01:01:52.631871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.105 [2024-07-16 01:01:52.641166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.105 [2024-07-16 01:01:52.641598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.105 [2024-07-16 01:01:52.641628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.105 [2024-07-16 01:01:52.641646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.105 [2024-07-16 01:01:52.641895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.105 [2024-07-16 01:01:52.642137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.105 [2024-07-16 01:01:52.642161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.105 [2024-07-16 01:01:52.642175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.105 [2024-07-16 01:01:52.645748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.105 [2024-07-16 01:01:52.655047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.105 [2024-07-16 01:01:52.655489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.105 [2024-07-16 01:01:52.655520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.105 [2024-07-16 01:01:52.655537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.105 [2024-07-16 01:01:52.655775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.105 [2024-07-16 01:01:52.656029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.105 [2024-07-16 01:01:52.656053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.105 [2024-07-16 01:01:52.656068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.105 [2024-07-16 01:01:52.659640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.105 [2024-07-16 01:01:52.668935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.105 [2024-07-16 01:01:52.669409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.105 [2024-07-16 01:01:52.669445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.105 [2024-07-16 01:01:52.669463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.105 [2024-07-16 01:01:52.669700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.105 [2024-07-16 01:01:52.669957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.105 [2024-07-16 01:01:52.669981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.105 [2024-07-16 01:01:52.669996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.105 [2024-07-16 01:01:52.673567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.105 [2024-07-16 01:01:52.682846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.105 [2024-07-16 01:01:52.683310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.105 [2024-07-16 01:01:52.683341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.105 [2024-07-16 01:01:52.683358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.105 [2024-07-16 01:01:52.683595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.105 [2024-07-16 01:01:52.683836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.105 [2024-07-16 01:01:52.683859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.105 [2024-07-16 01:01:52.683874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.105 [2024-07-16 01:01:52.687459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.105 [2024-07-16 01:01:52.696736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.105 [2024-07-16 01:01:52.697151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.105 [2024-07-16 01:01:52.697181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.105 [2024-07-16 01:01:52.697199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.105 [2024-07-16 01:01:52.697436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.105 [2024-07-16 01:01:52.697678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.105 [2024-07-16 01:01:52.697701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.105 [2024-07-16 01:01:52.697715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.105 [2024-07-16 01:01:52.701298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.105 [2024-07-16 01:01:52.710570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.105 [2024-07-16 01:01:52.711024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.105 [2024-07-16 01:01:52.711055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.105 [2024-07-16 01:01:52.711072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.105 [2024-07-16 01:01:52.711309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.105 [2024-07-16 01:01:52.711557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.105 [2024-07-16 01:01:52.711580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.105 [2024-07-16 01:01:52.711595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.105 [2024-07-16 01:01:52.715183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.105 [2024-07-16 01:01:52.724476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.105 [2024-07-16 01:01:52.724921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.105 [2024-07-16 01:01:52.724953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.105 [2024-07-16 01:01:52.724970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.105 [2024-07-16 01:01:52.725208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.105 [2024-07-16 01:01:52.725450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.105 [2024-07-16 01:01:52.725474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.105 [2024-07-16 01:01:52.725488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.105 [2024-07-16 01:01:52.729069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.105 [2024-07-16 01:01:52.738353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.105 [2024-07-16 01:01:52.738814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.105 [2024-07-16 01:01:52.738845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.105 [2024-07-16 01:01:52.738863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.105 [2024-07-16 01:01:52.739110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.105 [2024-07-16 01:01:52.739351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.105 [2024-07-16 01:01:52.739376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.105 [2024-07-16 01:01:52.739390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.105 [2024-07-16 01:01:52.742971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.105 [2024-07-16 01:01:52.752287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.105 [2024-07-16 01:01:52.752768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.105 [2024-07-16 01:01:52.752799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.105 [2024-07-16 01:01:52.752816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.105 [2024-07-16 01:01:52.753063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.105 [2024-07-16 01:01:52.753306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.105 [2024-07-16 01:01:52.753329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.105 [2024-07-16 01:01:52.753344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.105 [2024-07-16 01:01:52.756935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.105 [2024-07-16 01:01:52.766217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.105 [2024-07-16 01:01:52.766686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.105 [2024-07-16 01:01:52.766716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.105 [2024-07-16 01:01:52.766733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.105 [2024-07-16 01:01:52.766980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.105 [2024-07-16 01:01:52.767223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.105 [2024-07-16 01:01:52.767246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.105 [2024-07-16 01:01:52.767261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.105 [2024-07-16 01:01:52.770835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.105 [2024-07-16 01:01:52.780127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.105 [2024-07-16 01:01:52.780774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.105 [2024-07-16 01:01:52.780834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.105 [2024-07-16 01:01:52.780851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.105 [2024-07-16 01:01:52.781098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.106 [2024-07-16 01:01:52.781341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.106 [2024-07-16 01:01:52.781364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.106 [2024-07-16 01:01:52.781378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.106 [2024-07-16 01:01:52.784962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.106 [2024-07-16 01:01:52.794031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.106 [2024-07-16 01:01:52.794652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.106 [2024-07-16 01:01:52.794702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.106 [2024-07-16 01:01:52.794720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.106 [2024-07-16 01:01:52.794967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.106 [2024-07-16 01:01:52.795209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.106 [2024-07-16 01:01:52.795232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.106 [2024-07-16 01:01:52.795247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.106 [2024-07-16 01:01:52.798819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
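The repeated "connect() failed, errno = 111" entries above are ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 while the host driver keeps retrying, and the entries that follow show why (the previous nvmf_tgt instance was killed and is being restarted). A minimal bash probe against the same endpoint, as an illustrative sketch that is not part of bdevperf.sh:

  # Hedged sketch -- not from the test scripts. connect() errno 111
  # (ECONNREFUSED) corresponds to this probe failing while no NVMe/TCP
  # listener is up on the address/port seen in the log.
  if timeout 1 bash -c ': </dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "listener reachable on 10.0.0.2:4420"
  else
      echo "connection refused or timed out (matches errno = 111 above)"
  fi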
00:25:18.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2745586 Killed "${NVMF_APP[@]}" "$@"
00:25:18.106 01:01:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:25:18.106 01:01:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:25:18.106 01:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:18.106 01:01:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:18.106 01:01:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:18.106 [2024-07-16 01:01:52.807898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.106 [2024-07-16 01:01:52.808356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.106 [2024-07-16 01:01:52.808386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420
00:25:18.106 [2024-07-16 01:01:52.808403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set
00:25:18.106 [2024-07-16 01:01:52.808641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor
00:25:18.106 [2024-07-16 01:01:52.808892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.106 [2024-07-16 01:01:52.808916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.106 [2024-07-16 01:01:52.808931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.106 01:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2746608
00:25:18.106 01:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:25:18.106 01:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2746608
00:25:18.106 01:01:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2746608 ']'
00:25:18.106 01:01:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:18.106 01:01:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:18.106 01:01:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:18.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:18.106 01:01:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:18.106 01:01:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:18.106 [2024-07-16 01:01:52.812502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
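After nvmfappstart -m 0xE relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace, waitforlisten 2746608 polls the new process until its RPC socket (/var/tmp/spdk.sock, per the xtrace above) answers. A rough sketch of that wait, assuming rpc.py is invoked from the SPDK source tree and using the spdk_get_version RPC; the actual helper lives in common/autotest_common.sh:

  pid=2746608                    # pid reported by nvmfappstart above
  rpc=/var/tmp/spdk.sock         # RPC socket path from the xtrace above
  for _ in $(seq 1 100); do
      kill -0 "$pid" 2>/dev/null || { echo "target exited prematurely"; break; }
      if scripts/rpc.py -s "$rpc" spdk_get_version >/dev/null 2>&1; then
          echo "nvmf_tgt is up and listening on $rpc"
          break
      fi
      sleep 0.5
  done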
00:25:18.106 [2024-07-16 01:01:52.821790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.106 [2024-07-16 01:01:52.822232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.106 [2024-07-16 01:01:52.822263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.106 [2024-07-16 01:01:52.822280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.106 [2024-07-16 01:01:52.822518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.106 [2024-07-16 01:01:52.822759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.106 [2024-07-16 01:01:52.822783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.106 [2024-07-16 01:01:52.822798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.106 [2024-07-16 01:01:52.826378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.106 [2024-07-16 01:01:52.835663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.106 [2024-07-16 01:01:52.836105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.106 [2024-07-16 01:01:52.836136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.106 [2024-07-16 01:01:52.836153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.106 [2024-07-16 01:01:52.836396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.106 [2024-07-16 01:01:52.836638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.106 [2024-07-16 01:01:52.836662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.106 [2024-07-16 01:01:52.836677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.106 [2024-07-16 01:01:52.840260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.106 [2024-07-16 01:01:52.849545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.106 [2024-07-16 01:01:52.850011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.106 [2024-07-16 01:01:52.850043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.106 [2024-07-16 01:01:52.850061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.106 [2024-07-16 01:01:52.850299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.106 [2024-07-16 01:01:52.850541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.106 [2024-07-16 01:01:52.850565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.106 [2024-07-16 01:01:52.850580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.106 [2024-07-16 01:01:52.854161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.368 [2024-07-16 01:01:52.863450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.368 [2024-07-16 01:01:52.863623] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:25:18.368 [2024-07-16 01:01:52.863693] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.368 [2024-07-16 01:01:52.863915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.368 [2024-07-16 01:01:52.863946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.368 [2024-07-16 01:01:52.863963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.368 [2024-07-16 01:01:52.864200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.368 [2024-07-16 01:01:52.864442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.368 [2024-07-16 01:01:52.864466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.368 [2024-07-16 01:01:52.864481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.368 [2024-07-16 01:01:52.868057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.368 [2024-07-16 01:01:52.877338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.368 [2024-07-16 01:01:52.877776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.368 [2024-07-16 01:01:52.877807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.368 [2024-07-16 01:01:52.877824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.368 [2024-07-16 01:01:52.878078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.368 [2024-07-16 01:01:52.878321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.368 [2024-07-16 01:01:52.878344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.368 [2024-07-16 01:01:52.878359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.368 [2024-07-16 01:01:52.882109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.368 [2024-07-16 01:01:52.891178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.368 [2024-07-16 01:01:52.891637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.368 [2024-07-16 01:01:52.891668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.368 [2024-07-16 01:01:52.891685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.368 [2024-07-16 01:01:52.891933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.368 [2024-07-16 01:01:52.892175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.368 [2024-07-16 01:01:52.892199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.368 [2024-07-16 01:01:52.892213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.368 [2024-07-16 01:01:52.895786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.368 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.368 [2024-07-16 01:01:52.905069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.368 [2024-07-16 01:01:52.905501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.368 [2024-07-16 01:01:52.905531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.368 [2024-07-16 01:01:52.905548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.368 [2024-07-16 01:01:52.905786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.368 [2024-07-16 01:01:52.906036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.368 [2024-07-16 01:01:52.906060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.368 [2024-07-16 01:01:52.906075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.368 [2024-07-16 01:01:52.909655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.368 [2024-07-16 01:01:52.918939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.368 [2024-07-16 01:01:52.919379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.368 [2024-07-16 01:01:52.919411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.368 [2024-07-16 01:01:52.919428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.368 [2024-07-16 01:01:52.919666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.368 [2024-07-16 01:01:52.919920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.368 [2024-07-16 01:01:52.919945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.368 [2024-07-16 01:01:52.919965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.368 [2024-07-16 01:01:52.923538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
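The "EAL: No free 2048 kB hugepages reported on node 1" notice above is informational here (node 0 evidently had enough pages, since the target goes on to start), but per-node hugepage availability is worth checking when DPDK initialization complains. A generic sysfs check, not tied to this test:

  # 2 MiB hugepage totals and free counts per NUMA node
  grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/{nr,free}_hugepages
  grep -i huge /proc/meminfo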
00:25:18.368 [2024-07-16 01:01:52.932820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.368 [2024-07-16 01:01:52.933260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.368 [2024-07-16 01:01:52.933290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.368 [2024-07-16 01:01:52.933308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.368 [2024-07-16 01:01:52.933545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.368 [2024-07-16 01:01:52.933787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.368 [2024-07-16 01:01:52.933811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.368 [2024-07-16 01:01:52.933826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.368 [2024-07-16 01:01:52.937404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.368 [2024-07-16 01:01:52.940013] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:18.368 [2024-07-16 01:01:52.946688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.368 [2024-07-16 01:01:52.947242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.368 [2024-07-16 01:01:52.947279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.368 [2024-07-16 01:01:52.947298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.368 [2024-07-16 01:01:52.947541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.368 [2024-07-16 01:01:52.947786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.368 [2024-07-16 01:01:52.947810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.368 [2024-07-16 01:01:52.947827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.368 [2024-07-16 01:01:52.951411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.368 [2024-07-16 01:01:52.960705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.368 [2024-07-16 01:01:52.961322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.368 [2024-07-16 01:01:52.961377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.368 [2024-07-16 01:01:52.961401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.368 [2024-07-16 01:01:52.961658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.368 [2024-07-16 01:01:52.961919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.369 [2024-07-16 01:01:52.961944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.369 [2024-07-16 01:01:52.961963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.369 [2024-07-16 01:01:52.965535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.369 [2024-07-16 01:01:52.974614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.369 [2024-07-16 01:01:52.975081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.369 [2024-07-16 01:01:52.975115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.369 [2024-07-16 01:01:52.975133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.369 [2024-07-16 01:01:52.975372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.369 [2024-07-16 01:01:52.975614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.369 [2024-07-16 01:01:52.975638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.369 [2024-07-16 01:01:52.975654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.369 [2024-07-16 01:01:52.979234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.369 [2024-07-16 01:01:52.988521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.369 [2024-07-16 01:01:52.988994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.369 [2024-07-16 01:01:52.989027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.369 [2024-07-16 01:01:52.989045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.369 [2024-07-16 01:01:52.989283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.369 [2024-07-16 01:01:52.989525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.369 [2024-07-16 01:01:52.989549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.369 [2024-07-16 01:01:52.989565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.369 [2024-07-16 01:01:52.993148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.369 [2024-07-16 01:01:53.002419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.369 [2024-07-16 01:01:53.002916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.369 [2024-07-16 01:01:53.002949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.369 [2024-07-16 01:01:53.002967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.369 [2024-07-16 01:01:53.003206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.369 [2024-07-16 01:01:53.003449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.369 [2024-07-16 01:01:53.003472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.369 [2024-07-16 01:01:53.003488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.369 [2024-07-16 01:01:53.007073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.369 [2024-07-16 01:01:53.016368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.369 [2024-07-16 01:01:53.016988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.369 [2024-07-16 01:01:53.017030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.369 [2024-07-16 01:01:53.017053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.369 [2024-07-16 01:01:53.017317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.369 [2024-07-16 01:01:53.017566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.369 [2024-07-16 01:01:53.017591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.369 [2024-07-16 01:01:53.017609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.369 [2024-07-16 01:01:53.021187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.369 [2024-07-16 01:01:53.030259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.369 [2024-07-16 01:01:53.030717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.369 [2024-07-16 01:01:53.030748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.369 [2024-07-16 01:01:53.030765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.369 [2024-07-16 01:01:53.031016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.369 [2024-07-16 01:01:53.031258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.369 [2024-07-16 01:01:53.031282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.369 [2024-07-16 01:01:53.031297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.369 [2024-07-16 01:01:53.034864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.369 [2024-07-16 01:01:53.044142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.369 [2024-07-16 01:01:53.044618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.369 [2024-07-16 01:01:53.044651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.369 [2024-07-16 01:01:53.044669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.369 [2024-07-16 01:01:53.044922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.369 [2024-07-16 01:01:53.045165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.369 [2024-07-16 01:01:53.045189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.369 [2024-07-16 01:01:53.045204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.369 [2024-07-16 01:01:53.048773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.369 [2024-07-16 01:01:53.058059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.369 [2024-07-16 01:01:53.058502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.369 [2024-07-16 01:01:53.058533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.369 [2024-07-16 01:01:53.058550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.369 [2024-07-16 01:01:53.058789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.369 [2024-07-16 01:01:53.059043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.369 [2024-07-16 01:01:53.059068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.369 [2024-07-16 01:01:53.059092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.369 [2024-07-16 01:01:53.061711] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.369 [2024-07-16 01:01:53.061747] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.369 [2024-07-16 01:01:53.061762] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.369 [2024-07-16 01:01:53.061776] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.369 [2024-07-16 01:01:53.061787] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
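The app_setup_trace notices above describe how to consume the tracepoints enabled by the -e 0xFFFF flag: either attach to the live shared-memory ring or keep it for later. The two options, as printed by the target (the spdk_trace tool is assumed to be the one built in the SPDK tree):

  spdk_trace -s nvmf -i 0            # live snapshot of tracepoints for instance id 0
  cp /dev/shm/nvmf_trace.0 /tmp/     # keep the ring buffer for offline analysis/debug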
00:25:18.369 [2024-07-16 01:01:53.061871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:18.369 [2024-07-16 01:01:53.061927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:18.369 [2024-07-16 01:01:53.061931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.369 [2024-07-16 01:01:53.062664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.369 [2024-07-16 01:01:53.071962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.369 [2024-07-16 01:01:53.072572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.369 [2024-07-16 01:01:53.072613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.369 [2024-07-16 01:01:53.072636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.369 [2024-07-16 01:01:53.072895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.369 [2024-07-16 01:01:53.073144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.369 [2024-07-16 01:01:53.073168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.369 [2024-07-16 01:01:53.073187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.369 [2024-07-16 01:01:53.076765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.369 [2024-07-16 01:01:53.085856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.369 [2024-07-16 01:01:53.086485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.369 [2024-07-16 01:01:53.086527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.369 [2024-07-16 01:01:53.086549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.369 [2024-07-16 01:01:53.086800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.369 [2024-07-16 01:01:53.087057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.369 [2024-07-16 01:01:53.087082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.369 [2024-07-16 01:01:53.087101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.369 [2024-07-16 01:01:53.090676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
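The three "Reactor started on core ..." notices above are consistent with the -m 0xE core mask the target was started with: 0xE is binary 1110, so reactors run on cores 1, 2 and 3 and core 0 is left out. A one-line check of that arithmetic:

  echo 'obase=2; ibase=16; E' | bc   # prints 1110 -> cores 1-3 selected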
00:25:18.369 [2024-07-16 01:01:53.099765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.369 [2024-07-16 01:01:53.100426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.369 [2024-07-16 01:01:53.100475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.369 [2024-07-16 01:01:53.100500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.369 [2024-07-16 01:01:53.100772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.369 [2024-07-16 01:01:53.101032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.369 [2024-07-16 01:01:53.101058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.369 [2024-07-16 01:01:53.101077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.370 [2024-07-16 01:01:53.104649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.370 [2024-07-16 01:01:53.113842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.370 [2024-07-16 01:01:53.114563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.370 [2024-07-16 01:01:53.114627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.370 [2024-07-16 01:01:53.114652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.370 [2024-07-16 01:01:53.114926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.370 [2024-07-16 01:01:53.115178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.370 [2024-07-16 01:01:53.115203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.370 [2024-07-16 01:01:53.115223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.370 [2024-07-16 01:01:53.118796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.634 [2024-07-16 01:01:53.127884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.634 [2024-07-16 01:01:53.128418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.634 [2024-07-16 01:01:53.128458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.634 [2024-07-16 01:01:53.128480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.634 [2024-07-16 01:01:53.128728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.634 [2024-07-16 01:01:53.128985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.634 [2024-07-16 01:01:53.129011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.634 [2024-07-16 01:01:53.129029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.634 [2024-07-16 01:01:53.132601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.634 [2024-07-16 01:01:53.141904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.634 [2024-07-16 01:01:53.142614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.634 [2024-07-16 01:01:53.142680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.634 [2024-07-16 01:01:53.142706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.634 [2024-07-16 01:01:53.142984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.634 [2024-07-16 01:01:53.143237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.634 [2024-07-16 01:01:53.143262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.634 [2024-07-16 01:01:53.143302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.634 [2024-07-16 01:01:53.146885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.634 [2024-07-16 01:01:53.155974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.634 [2024-07-16 01:01:53.156466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.634 [2024-07-16 01:01:53.156499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.634 [2024-07-16 01:01:53.156517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.634 [2024-07-16 01:01:53.156756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.634 [2024-07-16 01:01:53.157012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.634 [2024-07-16 01:01:53.157037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.634 [2024-07-16 01:01:53.157052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.634 [2024-07-16 01:01:53.160622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.634 [2024-07-16 01:01:53.169906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.634 [2024-07-16 01:01:53.170383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.634 [2024-07-16 01:01:53.170415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.634 [2024-07-16 01:01:53.170433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.634 [2024-07-16 01:01:53.170671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.634 [2024-07-16 01:01:53.170924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.634 [2024-07-16 01:01:53.170949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.634 [2024-07-16 01:01:53.170964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.634 [2024-07-16 01:01:53.174378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.634 [2024-07-16 01:01:53.183425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.634 [2024-07-16 01:01:53.183848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.634 [2024-07-16 01:01:53.183883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.634 [2024-07-16 01:01:53.183901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.634 [2024-07-16 01:01:53.184116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.634 [2024-07-16 01:01:53.184335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.634 [2024-07-16 01:01:53.184357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.634 [2024-07-16 01:01:53.184370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.634 01:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:18.634 01:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:25:18.634 01:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:18.634 01:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:18.634 01:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:18.634 [2024-07-16 01:01:53.187642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.634 [2024-07-16 01:01:53.196997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.634 [2024-07-16 01:01:53.197424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.634 [2024-07-16 01:01:53.197453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.634 [2024-07-16 01:01:53.197469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.634 [2024-07-16 01:01:53.197682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.634 [2024-07-16 01:01:53.197937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.634 [2024-07-16 01:01:53.197959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.634 [2024-07-16 01:01:53.197972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.634 [2024-07-16 01:01:53.201276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.634 [2024-07-16 01:01:53.210493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.634 01:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.634 [2024-07-16 01:01:53.210916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.634 01:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:18.634 [2024-07-16 01:01:53.210957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.634 [2024-07-16 01:01:53.210973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.634 01:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.634 [2024-07-16 01:01:53.211187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.634 01:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:18.634 [2024-07-16 01:01:53.211421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.634 [2024-07-16 01:01:53.211443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.634 [2024-07-16 01:01:53.211456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.634 [2024-07-16 01:01:53.214737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.634 [2024-07-16 01:01:53.216534] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.634 01:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.634 01:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:18.634 01:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.634 01:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:18.634 [2024-07-16 01:01:53.224011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.634 [2024-07-16 01:01:53.224461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.635 [2024-07-16 01:01:53.224504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.635 [2024-07-16 01:01:53.224520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.635 [2024-07-16 01:01:53.224753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.635 [2024-07-16 01:01:53.224992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.635 [2024-07-16 01:01:53.225015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.635 [2024-07-16 01:01:53.225028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:18.635 [2024-07-16 01:01:53.228282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.635 [2024-07-16 01:01:53.237559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.635 [2024-07-16 01:01:53.237978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.635 [2024-07-16 01:01:53.238006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.635 [2024-07-16 01:01:53.238022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.635 [2024-07-16 01:01:53.238252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.635 [2024-07-16 01:01:53.238465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.635 [2024-07-16 01:01:53.238485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.635 [2024-07-16 01:01:53.238499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.635 [2024-07-16 01:01:53.241740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.635 [2024-07-16 01:01:53.251107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.635 [2024-07-16 01:01:53.251753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.635 [2024-07-16 01:01:53.251797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420 00:25:18.635 [2024-07-16 01:01:53.251820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set 00:25:18.635 [2024-07-16 01:01:53.252057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor 00:25:18.635 [2024-07-16 01:01:53.252295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.635 [2024-07-16 01:01:53.252317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.635 [2024-07-16 01:01:53.252336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.635 [2024-07-16 01:01:53.255496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.635 Malloc0
00:25:18.635 01:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:18.635 01:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:18.635 01:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:18.635 01:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:18.635 01:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:18.635 01:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:18.635 01:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:18.635 01:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:18.635 [2024-07-16 01:01:53.264797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.635 [2024-07-16 01:01:53.265212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.635 [2024-07-16 01:01:53.265249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eefe0 with addr=10.0.0.2, port=4420
00:25:18.635 [2024-07-16 01:01:53.265266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefe0 is same with the state(5) to be set
00:25:18.635 [2024-07-16 01:01:53.265481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefe0 (9): Bad file descriptor
00:25:18.635 [2024-07-16 01:01:53.265699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.635 [2024-07-16 01:01:53.265720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.635 [2024-07-16 01:01:53.265733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.635 [2024-07-16 01:01:53.268976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.635 01:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:18.635 01:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:18.635 01:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:18.635 01:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:18.635 [2024-07-16 01:01:53.276251] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:18.635 [2024-07-16 01:01:53.278416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.635 01:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:18.635 01:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2745874
00:25:18.635 [2024-07-16 01:01:53.313608] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
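The rpc_cmd calls traced above (bdevperf.sh lines 17-21, interleaved with the reconnect noise) configure the restarted target that bdevperf then reattaches to: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem cnode1, its namespace, and a listener on 10.0.0.2:4420. Issued by hand against the same RPC socket the sequence would look roughly like the following sketch (rpc.py invocation from the SPDK tree is assumed; the arguments are the ones in the log):

  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420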
00:25:28.606 00:25:28.606 Latency(us) 00:25:28.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.606 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:28.606 Verification LBA range: start 0x0 length 0x4000 00:25:28.606 Nvme1n1 : 15.01 6279.22 24.53 8861.42 0.00 8428.70 1553.45 17670.45 00:25:28.606 =================================================================================================================== 00:25:28.606 Total : 6279.22 24.53 8861.42 0.00 8428.70 1553.45 17670.45 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:28.606 rmmod nvme_tcp 00:25:28.606 rmmod nvme_fabrics 00:25:28.606 rmmod nvme_keyring 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2746608 ']' 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2746608 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2746608 ']' 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2746608 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2746608 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2746608' 00:25:28.606 killing process with pid 2746608 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2746608 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2746608 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.606 01:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.530 01:02:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:30.530 00:25:30.530 real 0m22.445s 00:25:30.530 user 0m59.261s 00:25:30.530 sys 0m4.646s 00:25:30.530 01:02:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:30.530 01:02:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.530 ************************************ 00:25:30.530 END TEST nvmf_bdevperf 00:25:30.530 ************************************ 00:25:30.530 01:02:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:30.530 01:02:05 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:30.530 01:02:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:30.530 01:02:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.531 01:02:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:30.531 ************************************ 00:25:30.531 START TEST nvmf_target_disconnect 00:25:30.531 ************************************ 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:30.531 * Looking for test storage... 
00:25:30.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:25:30.531 01:02:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:32.434 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:32.434 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.434 01:02:07 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:32.434 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:32.434 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.434 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:32.435 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.435 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.435 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:32.435 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:32.435 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:32.435 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.435 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.435 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:25:32.435 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:32.693 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.693 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.693 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.693 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:32.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:25:32.693 00:25:32.693 --- 10.0.0.2 ping statistics --- 00:25:32.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.693 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:32.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:25:32.694 00:25:32.694 --- 10.0.0.1 ping statistics --- 00:25:32.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.694 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:32.694 ************************************ 00:25:32.694 START TEST nvmf_target_disconnect_tc1 00:25:32.694 ************************************ 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:25:32.694 
01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:32.694 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.694 [2024-07-16 01:02:07.390221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.694 [2024-07-16 01:02:07.390302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedf340 with addr=10.0.0.2, port=4420 00:25:32.694 [2024-07-16 01:02:07.390358] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:32.694 [2024-07-16 01:02:07.390378] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:32.694 [2024-07-16 01:02:07.390390] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:32.694 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:32.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:32.694 Initializing NVMe Controllers 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:32.694 00:25:32.694 real 0m0.092s 00:25:32.694 user 0m0.042s 00:25:32.694 sys 0m0.049s 
00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:32.694 ************************************ 00:25:32.694 END TEST nvmf_target_disconnect_tc1 00:25:32.694 ************************************ 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:32.694 01:02:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:32.955 ************************************ 00:25:32.955 START TEST nvmf_target_disconnect_tc2 00:25:32.955 ************************************ 00:25:32.955 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:25:32.955 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:32.955 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:32.955 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:32.955 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:32.955 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:32.955 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2749692 00:25:32.955 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:32.955 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2749692 00:25:32.955 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2749692 ']' 00:25:32.955 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.955 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:32.955 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
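The tc2 case starts its own target instance inside the cvl_0_0_ns_spdk namespace (the ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 invocation above) and then blocks in waitforlisten until the RPC socket answers. A condensed sketch of that start-and-wait step follows; the rpc_get_methods probe and the default /var/tmp/spdk.sock path are assumptions standing in for the harness's own waitforlisten helper:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # poll until the target's JSON-RPC server answers (assumption: default RPC socket path)
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done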
00:25:32.955 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:32.955 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:32.955 [2024-07-16 01:02:07.505714] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:25:32.955 [2024-07-16 01:02:07.505793] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.955 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.955 [2024-07-16 01:02:07.573584] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:32.955 [2024-07-16 01:02:07.684252] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.955 [2024-07-16 01:02:07.684304] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.955 [2024-07-16 01:02:07.684332] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:32.955 [2024-07-16 01:02:07.684343] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:32.955 [2024-07-16 01:02:07.684352] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:32.955 [2024-07-16 01:02:07.684441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:32.955 [2024-07-16 01:02:07.684502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:32.955 [2024-07-16 01:02:07.684570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:32.955 [2024-07-16 01:02:07.684573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:33.214 Malloc0 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:33.214 01:02:07 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:33.214 [2024-07-16 01:02:07.877435] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:33.214 [2024-07-16 01:02:07.905704] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2749836 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:33.214 01:02:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:33.214 EAL: No free 2048 kB 
hugepages reported on node 1 00:25:35.801 01:02:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2749692 00:25:35.801 01:02:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 [2024-07-16 01:02:09.932969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting 
I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Write completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 [2024-07-16 01:02:09.933305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.801 starting I/O failed 00:25:35.801 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 
00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 [2024-07-16 01:02:09.933687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 
00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Read completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 Write completed with error (sct=0, sc=8) 00:25:35.802 starting I/O failed 00:25:35.802 [2024-07-16 01:02:09.934050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:35.802 [2024-07-16 01:02:09.934290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.802 [2024-07-16 01:02:09.934325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.802 qpair failed and we were unable to recover it. 00:25:35.802 [2024-07-16 01:02:09.934537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.802 [2024-07-16 01:02:09.934563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.802 qpair failed and we were unable to recover it. 00:25:35.802 [2024-07-16 01:02:09.934958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.802 [2024-07-16 01:02:09.934984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.802 qpair failed and we were unable to recover it. 00:25:35.802 [2024-07-16 01:02:09.935156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.802 [2024-07-16 01:02:09.935182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.802 qpair failed and we were unable to recover it. 00:25:35.802 [2024-07-16 01:02:09.935360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.802 [2024-07-16 01:02:09.935386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.802 qpair failed and we were unable to recover it. 00:25:35.802 [2024-07-16 01:02:09.935547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.802 [2024-07-16 01:02:09.935573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.802 qpair failed and we were unable to recover it. 
00:25:35.802 [2024-07-16 01:02:09.935792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.802 [2024-07-16 01:02:09.935817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.802 qpair failed and we were unable to recover it. 00:25:35.802 [2024-07-16 01:02:09.936028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.802 [2024-07-16 01:02:09.936068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.802 qpair failed and we were unable to recover it. 00:25:35.802 [2024-07-16 01:02:09.936247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.802 [2024-07-16 01:02:09.936280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.802 qpair failed and we were unable to recover it. 00:25:35.802 [2024-07-16 01:02:09.936501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.802 [2024-07-16 01:02:09.936528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.802 qpair failed and we were unable to recover it. 00:25:35.802 [2024-07-16 01:02:09.936712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.802 [2024-07-16 01:02:09.936737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.802 qpair failed and we were unable to recover it. 00:25:35.802 [2024-07-16 01:02:09.936926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.802 [2024-07-16 01:02:09.936952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.802 qpair failed and we were unable to recover it. 00:25:35.802 [2024-07-16 01:02:09.937111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.802 [2024-07-16 01:02:09.937137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.802 qpair failed and we were unable to recover it. 00:25:35.802 [2024-07-16 01:02:09.937325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.802 [2024-07-16 01:02:09.937350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.802 qpair failed and we were unable to recover it. 00:25:35.802 [2024-07-16 01:02:09.937555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.802 [2024-07-16 01:02:09.937579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.802 qpair failed and we were unable to recover it. 00:25:35.802 [2024-07-16 01:02:09.937762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.802 [2024-07-16 01:02:09.937787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.802 qpair failed and we were unable to recover it. 
00:25:35.802 [2024-07-16 01:02:09.937955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.802 [2024-07-16 01:02:09.937981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.802 qpair failed and we were unable to recover it. 00:25:35.802 [2024-07-16 01:02:09.938141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.802 [2024-07-16 01:02:09.938166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.802 qpair failed and we were unable to recover it. 00:25:35.802 [2024-07-16 01:02:09.938318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.802 [2024-07-16 01:02:09.938343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.802 qpair failed and we were unable to recover it. 00:25:35.802 [2024-07-16 01:02:09.938495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.938535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.938733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.938764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.938940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.938966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.939152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.939177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.939327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.939352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.939531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.939556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.939739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.939764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 
00:25:35.803 [2024-07-16 01:02:09.939920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.939945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.940131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.940156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.940337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.940362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.940535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.940560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.940732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.940758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.940951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.940976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.941157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.941182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.941355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.941380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.941593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.941619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.941795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.941820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 
00:25:35.803 [2024-07-16 01:02:09.942008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.942034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.942193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.942218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.942397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.942422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.942564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.942605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.942845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.942870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.943053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.943080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.943238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.943263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.943414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.943440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.943641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.943666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.943848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.943873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 
00:25:35.803 [2024-07-16 01:02:09.944043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.944069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.944238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.944278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.944472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.944499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.944683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.944708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.944893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.944919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.945126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.945151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.945329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.945354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.945518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.945545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.945863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.945939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.946132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.946157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 
00:25:35.803 [2024-07-16 01:02:09.946305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.946331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.946512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.946553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.946739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.946764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.803 [2024-07-16 01:02:09.946930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.803 [2024-07-16 01:02:09.946956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.803 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.947119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.947149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.947414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.947442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.947711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.947754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.947952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.947978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.948126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.948152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.948333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.948359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 
00:25:35.804 [2024-07-16 01:02:09.948540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.948564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.948744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.948769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.948949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.948975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.949122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.949148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.949327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.949352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.949532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.949557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.949750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.949789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.949997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.950036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.950224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.950256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.950539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.950565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 
00:25:35.804 [2024-07-16 01:02:09.950709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.950734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.950919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.950946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.951103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.951129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.951274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.951300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.951505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.951536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.951816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.951868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.952074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.952100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.952306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.952331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.952558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.952586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.952813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.952838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 
00:25:35.804 [2024-07-16 01:02:09.953031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.953057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.953250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.953289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.953506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.953533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.953690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.953715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.953930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.953956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.954114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.954139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.954339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.954364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.954555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.954580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.954726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.954751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.954939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.954966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 
00:25:35.804 [2024-07-16 01:02:09.955122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.955148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.955309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.955335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.955530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.955573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.955765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.955790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.804 qpair failed and we were unable to recover it. 00:25:35.804 [2024-07-16 01:02:09.955992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.804 [2024-07-16 01:02:09.956024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.956230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.956255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.956453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.956495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.956793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.956817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.957010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.957036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.957194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.957218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 
00:25:35.805 [2024-07-16 01:02:09.957427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.957452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.957601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.957628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.957814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.957839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.958061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.958088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.958295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.958321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.958574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.958599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.958751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.958775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.958988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.959015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.959203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.959228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.959406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.959431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 
00:25:35.805 [2024-07-16 01:02:09.959605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.959630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.959805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.959831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.960018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.960044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.960219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.960261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.960462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.960507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.960689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.960714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.960871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.960902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.961076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.961101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.961360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.961404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.961727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.961784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 
00:25:35.805 [2024-07-16 01:02:09.961950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.961976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.962176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.962204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.962411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.962436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.962613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.962638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.962823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.962848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.963071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.963097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.963249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.805 [2024-07-16 01:02:09.963274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.805 qpair failed and we were unable to recover it. 00:25:35.805 [2024-07-16 01:02:09.963453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.963478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.963721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.963746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.963922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.963948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 
00:25:35.806 [2024-07-16 01:02:09.964139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.964165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.964361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.964408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.964660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.964715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.964923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.964949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.965130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.965177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.965384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.965428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.965635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.965660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.965836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.965860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.966051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.966078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.966284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.966325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 
00:25:35.806 [2024-07-16 01:02:09.966568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.966611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.966790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.966816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.967021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.967047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.967261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.967287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.967483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.967512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.967737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.967762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.967975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.968001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.968203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.968228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.968421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.968447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.968652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.968695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 
00:25:35.806 [2024-07-16 01:02:09.968911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.968936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.969150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.969175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.969324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.969351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.969519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.969547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.969739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.969764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.969963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.970006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.970193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.970218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.970536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.970561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.970746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.970771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 00:25:35.806 [2024-07-16 01:02:09.970950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.970976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it. 
00:25:35.806 [2024-07-16 01:02:09.971160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.806 [2024-07-16 01:02:09.971187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.806 qpair failed and we were unable to recover it.
[... the identical pair of errors, posix_sock_create "connect() failed, errno = 111" followed by nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420", repeats continuously from 01:02:09.971 through 01:02:10.017, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:25:35.812 [2024-07-16 01:02:10.017666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.017692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.017841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.017866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.018066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.018109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.018348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.018390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.018592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.018621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.018797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.018824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.019030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.019073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.019251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.019299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.019505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.019547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.019700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.019725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 
00:25:35.812 [2024-07-16 01:02:10.019870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.019913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.020098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.020124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.020322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.020364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.020605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.020647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.020802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.020827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.021007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.021033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.021213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.021255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.021484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.021528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.021713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.021756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.021929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.021958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 
00:25:35.812 [2024-07-16 01:02:10.022162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.022190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.022459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.022503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.022674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.022699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.022891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.022919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.023169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.023213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.023406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.023457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.023688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.023737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.023934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.023967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.024197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.024250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.024452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.024502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 
00:25:35.812 [2024-07-16 01:02:10.024692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.024723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.024931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.024993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.025229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.025289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.025535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.025584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.025744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.025771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.025951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.025995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.026196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.812 [2024-07-16 01:02:10.026240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.812 qpair failed and we were unable to recover it. 00:25:35.812 [2024-07-16 01:02:10.026446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.026490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.026675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.026701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.026854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.026885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 
00:25:35.813 [2024-07-16 01:02:10.027094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.027138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.027286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.027313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.027543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.027586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.027736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.027762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.027985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.028029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.028224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.028253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.028490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.028533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.028708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.028733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.028904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.028931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.029167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.029210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 
00:25:35.813 [2024-07-16 01:02:10.029446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.029489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.029710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.029737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.029936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.029966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.030167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.030209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.030419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.030462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.030645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.030671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.030884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.030911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.031092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.031138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.031312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.031355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.031589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.031632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 
00:25:35.813 [2024-07-16 01:02:10.031774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.031799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.031954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.031980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.032160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.032202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.032407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.032451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.032641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.032666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.032842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.032869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.033061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.033103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.033281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.033309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.033553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.033597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.033799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.033825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 
00:25:35.813 [2024-07-16 01:02:10.034009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.034052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.034288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.034335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.034547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.034590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.034795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.034821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.035001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.035044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.035274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.035318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.035517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.035545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.035708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.813 [2024-07-16 01:02:10.035734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.813 qpair failed and we were unable to recover it. 00:25:35.813 [2024-07-16 01:02:10.035894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.035922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.036128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.036157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 
00:25:35.814 [2024-07-16 01:02:10.036403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.036445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.036689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.036733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.036888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.036914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.037110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.037156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.037363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.037406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.037580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.037624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.037772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.037798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.038028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.038072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.038225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.038252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.038493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.038536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 
00:25:35.814 [2024-07-16 01:02:10.038695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.038720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.038893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.038919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.039125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.039169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.039396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.039440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.039642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.039685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.039865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.039901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.040060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.040085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.040289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.040331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.040531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.040576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.040783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.040814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 
00:25:35.814 [2024-07-16 01:02:10.040999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.041027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.041232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.041261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.041454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.041482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.041709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.041737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.041950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.041977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.042171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.042199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.042424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.042452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.042696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.042724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.042935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.042961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.043141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.043167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 
00:25:35.814 [2024-07-16 01:02:10.043390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.043417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.043608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.043636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.043814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.814 [2024-07-16 01:02:10.043842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.814 qpair failed and we were unable to recover it. 00:25:35.814 [2024-07-16 01:02:10.044047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.044072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.044277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.044305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.044512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.044537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.044801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.044829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.045039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.045064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.045302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.045329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.045619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.045668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 
00:25:35.815 [2024-07-16 01:02:10.045866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.045903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.046102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.046127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.046372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.046397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.046605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.046632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.046858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.046893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.047114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.047143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.047325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.047353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.047544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.047571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.047836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.047864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.048068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.048093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 
00:25:35.815 [2024-07-16 01:02:10.048300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.048328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.048599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.048651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.048875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.048928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.049085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.049110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.049283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.049308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.049511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.049538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.049704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.049732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.049922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.049948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.050118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.050143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.050354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.050382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 
00:25:35.815 [2024-07-16 01:02:10.050701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.050750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.050985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.051010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.051154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.051195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.051397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.051422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.051592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.051622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.051820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.051848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.052055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.052081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.052273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.052301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.052497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.052525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.052708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.052736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 
00:25:35.815 [2024-07-16 01:02:10.052965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.052990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.053182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.053210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.053436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.053465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.053670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.053698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.815 qpair failed and we were unable to recover it. 00:25:35.815 [2024-07-16 01:02:10.053922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.815 [2024-07-16 01:02:10.053952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.054158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.054184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.054357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.054385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.054583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.054610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.054776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.054801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.054967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.054996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 
00:25:35.816 [2024-07-16 01:02:10.055217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.055245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.055449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.055474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.055647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.055672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.055874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.055908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.056081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.056106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.056287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.056312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.056532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.056573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.056804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.056829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.057033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.057059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.057261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.057288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 
00:25:35.816 [2024-07-16 01:02:10.057491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.057516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.057721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.057748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.057945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.057974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.058208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.058233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.058445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.058473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.058644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.058673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.058881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.058907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.059116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.059143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.059335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.059362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.059538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.059563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 
00:25:35.816 [2024-07-16 01:02:10.059792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.059820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.059978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.060007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.060181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.060205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.060432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.060460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.060654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.060682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.060841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.060866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.061072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.061100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.061322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.061349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.061551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.061575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.061796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.061824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 
00:25:35.816 [2024-07-16 01:02:10.061990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.062019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.062210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.062235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.062434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.062462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.062651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.062683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.062957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.062983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.063203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.816 [2024-07-16 01:02:10.063231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.816 qpair failed and we were unable to recover it. 00:25:35.816 [2024-07-16 01:02:10.063396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.063424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.063592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.063617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.063807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.063834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.064046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.064072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 
00:25:35.817 [2024-07-16 01:02:10.064246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.064271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.064495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.064523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.064712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.064740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.064974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.065001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.065183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.065209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.065424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.065449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.065618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.065643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.065818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.065846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.066049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.066078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.066282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.066308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 
00:25:35.817 [2024-07-16 01:02:10.066481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.066511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.066706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.066734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.066902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.066928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.067154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.067182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.067377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.067404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.067602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.067627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.067831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.067859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.068059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.068087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.068260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.068285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.068499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.068539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 
00:25:35.817 [2024-07-16 01:02:10.068732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.068760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.068968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.068994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.069193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.069221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.069388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.069416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.069607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.069632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.069856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.069891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.070057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.070086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.070255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.070280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.070475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.070503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.070724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.070752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 
00:25:35.817 [2024-07-16 01:02:10.070921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.070946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.071143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.071170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.071343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.071370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.071565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.071590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.071782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.071810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.071995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.072024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.072224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.072249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.817 qpair failed and we were unable to recover it. 00:25:35.817 [2024-07-16 01:02:10.072425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.817 [2024-07-16 01:02:10.072452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.072616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.072643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.072814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.072838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 
00:25:35.818 [2024-07-16 01:02:10.073043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.073069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.073248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.073276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.073470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.073494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.073667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.073694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.073923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.073949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.074154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.074178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.074391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.074416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.074622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.074647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.074831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.074856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.075037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.075063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 
00:25:35.818 [2024-07-16 01:02:10.075264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.075292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.075503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.075528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.075679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.075704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.075888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.075914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.076100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.076125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.076316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.076344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.076569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.076596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.076783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.076808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.077009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.077038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.077255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.077282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 
00:25:35.818 [2024-07-16 01:02:10.077483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.077507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.077678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.077710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.077901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.077938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.078137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.078162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.078366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.078394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.078565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.078593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.078771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.078796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.078989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.079017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.079174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.079202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.079422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.079447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 
00:25:35.818 [2024-07-16 01:02:10.079645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.079672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.079840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.079867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.080072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.080097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.818 qpair failed and we were unable to recover it. 00:25:35.818 [2024-07-16 01:02:10.080270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.818 [2024-07-16 01:02:10.080297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.080460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.080487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.080656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.080681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.080894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.080924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.081114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.081142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.081337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.081361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.081513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.081539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 
00:25:35.819 [2024-07-16 01:02:10.081766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.081794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.082005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.082032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.082257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.082285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.082465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.082493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.082693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.082718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.082968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.082994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.083151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.083176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.083353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.083380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.083578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.083605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.083813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.083841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 
00:25:35.819 [2024-07-16 01:02:10.084054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.084079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.084279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.084307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.084530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.084558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.084778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.084802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.084995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.085024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.085224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.085251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.085444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.085469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.085667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.085695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.085854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.085889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 00:25:35.819 [2024-07-16 01:02:10.086116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-16 01:02:10.086141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.819 qpair failed and we were unable to recover it. 
00:25:35.819 [2024-07-16 01:02:10.086340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.819 [2024-07-16 01:02:10.086368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420
00:25:35.819 qpair failed and we were unable to recover it.
00:25:35.819 [... the same three-record sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously for timestamps 01:02:10.086 through 01:02:10.135 ...]
00:25:35.824 [2024-07-16 01:02:10.134976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.824 [2024-07-16 01:02:10.135001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420
00:25:35.824 qpair failed and we were unable to recover it.
00:25:35.824 [2024-07-16 01:02:10.135173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.824 [2024-07-16 01:02:10.135200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.824 qpair failed and we were unable to recover it. 00:25:35.824 [2024-07-16 01:02:10.135442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.824 [2024-07-16 01:02:10.135491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.824 qpair failed and we were unable to recover it. 00:25:35.824 [2024-07-16 01:02:10.135726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.824 [2024-07-16 01:02:10.135754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.824 qpair failed and we were unable to recover it. 00:25:35.824 [2024-07-16 01:02:10.135955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.824 [2024-07-16 01:02:10.135980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.824 qpair failed and we were unable to recover it. 00:25:35.824 [2024-07-16 01:02:10.136183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.824 [2024-07-16 01:02:10.136211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.824 qpair failed and we were unable to recover it. 00:25:35.824 [2024-07-16 01:02:10.136471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.824 [2024-07-16 01:02:10.136523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.824 qpair failed and we were unable to recover it. 00:25:35.824 [2024-07-16 01:02:10.136740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.136767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.136993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.137019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.137217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.137244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.137440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.137468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 
00:25:35.825 [2024-07-16 01:02:10.137675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.137703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.137906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.137930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.138131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.138158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.138357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.138385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.138548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.138575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.138774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.138799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.138980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.139005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.139201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.139228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.139425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.139452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.139649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.139674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 
00:25:35.825 [2024-07-16 01:02:10.139883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.139911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.140127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.140155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.140353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.140381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.140581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.140610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.140834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.140862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.141102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.141130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.141357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.141384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.141589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.141614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.141838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.141865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.142071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.142099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 
00:25:35.825 [2024-07-16 01:02:10.142292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.142320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.142498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.142522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.142723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.142747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.142953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.142982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.143152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.143179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.143401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.143426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.143665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.143692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.143861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.143896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.144057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.144084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.144264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.144289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 
00:25:35.825 [2024-07-16 01:02:10.144485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.144513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.144814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.144873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.145115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.145143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.145344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.145369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.145600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.145627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.145847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.145874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.146056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.146084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.146285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.825 [2024-07-16 01:02:10.146309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.825 qpair failed and we were unable to recover it. 00:25:35.825 [2024-07-16 01:02:10.146464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.146489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.146665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.146690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 
00:25:35.826 [2024-07-16 01:02:10.146901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.146929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.147134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.147159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.147330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.147357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.147577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.147636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.147869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.147900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.148109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.148134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.148330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.148358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.148590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.148617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.148837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.148864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.149069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.149093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 
00:25:35.826 [2024-07-16 01:02:10.149251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.149278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.149518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.149570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.149739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.149766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.149940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.149966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.150193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.150226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.150508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.150564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.150758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.150785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.150989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.151015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.151218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.151246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.151552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.151610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 
00:25:35.826 [2024-07-16 01:02:10.151830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.151854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.152040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.152065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.152288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.152315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.152628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.152678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.152901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.152930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.153133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.153159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.153367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.153395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.153686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.153737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.153974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.154002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.154205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.154230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 
00:25:35.826 [2024-07-16 01:02:10.154469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.154493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.154655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.154680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.154838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.154865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.155048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.155073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.155240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.155267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.155532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.155560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.155778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.155805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.156040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.156066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.156267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.156294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.826 qpair failed and we were unable to recover it. 00:25:35.826 [2024-07-16 01:02:10.156608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.826 [2024-07-16 01:02:10.156666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 
00:25:35.827 [2024-07-16 01:02:10.156895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.156923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.157101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.157126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.157354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.157382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.157552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.157579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.157801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.157829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.158020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.158045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.158257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.158285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.158616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.158670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.158869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.158904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.159104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.159130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 
00:25:35.827 [2024-07-16 01:02:10.159356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.159384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.159710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.159763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.159958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.159986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.160172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.160197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.160409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.160436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.160693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.160721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.160946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.160975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.161152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.161177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.161367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.161392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.161591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.161645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 
00:25:35.827 [2024-07-16 01:02:10.161840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.161868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.162069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.162094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.162293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.162322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.162582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.162607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.162803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.162831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.163031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.163056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.163229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.163257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.163482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.163509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.163701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.163729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.163934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.163960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 
00:25:35.827 [2024-07-16 01:02:10.164136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.164164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.164451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.164505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.164679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.164706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.164913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.164938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.165147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.827 [2024-07-16 01:02:10.165175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.827 qpair failed and we were unable to recover it. 00:25:35.827 [2024-07-16 01:02:10.165448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.165476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.165685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.165712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.165892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.165917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.166098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.166123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.166276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.166301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 
00:25:35.828 [2024-07-16 01:02:10.166472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.166496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.166646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.166670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.166883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.166915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.167082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.167109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.167275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.167303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.167527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.167552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.167919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.167948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.168165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.168190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.168401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.168428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.168622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.168647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 
00:25:35.828 [2024-07-16 01:02:10.168841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.168868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.169064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.169092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.169263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.169291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.169512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.169536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.169687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.169711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.169908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.169936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.170141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.170169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.170396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.170421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.170595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.170622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.170821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.170848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 
00:25:35.828 [2024-07-16 01:02:10.171047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.171075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.171262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.171287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.171507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.171534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.171779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.171806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.171970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.171999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.172223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.172248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.172455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.172483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.172815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.172887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.173089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.173116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.173295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.173320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 
00:25:35.828 [2024-07-16 01:02:10.173552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.173580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.173778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.173805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.174003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.174031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.174265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.174290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.174490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.174518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.174908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.174936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.828 qpair failed and we were unable to recover it. 00:25:35.828 [2024-07-16 01:02:10.175161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.828 [2024-07-16 01:02:10.175189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.175409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.175434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.175625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.175652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.175849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.175884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 
00:25:35.829 [2024-07-16 01:02:10.176084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.176112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.176343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.176368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.176561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.176589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.176781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.176815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.177017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.177045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.177218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.177243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.177475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.177502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.177810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.177870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.178056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.178081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.178227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.178251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 
00:25:35.829 [2024-07-16 01:02:10.178445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.178473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.178729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.178775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.178999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.179028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.179205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.179230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.179422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.179450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.179810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.179874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.180081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.180109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.180280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.180305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.180529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.180557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.180754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.180782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 
00:25:35.829 [2024-07-16 01:02:10.180982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.181011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.181231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.181256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.181485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.181512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.181755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.181805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.181995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.182024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.182226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.182250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.182475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.182502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.182784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.182832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.183062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.183090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.183289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.183314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 
00:25:35.829 [2024-07-16 01:02:10.183542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.183575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.183746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.183775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.183988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.184016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.184189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.184214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.184439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.184467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.184686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.184714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.184911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.184940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.829 [2024-07-16 01:02:10.185117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.829 [2024-07-16 01:02:10.185142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.829 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.185347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.185374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.185567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.185594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 
00:25:35.830 [2024-07-16 01:02:10.185766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.185793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.185972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.185998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.186195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.186223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.186419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.186446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.186621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.186648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.186833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.186861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.187068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.187093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.187272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.187297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.187452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.187477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.187648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.187673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 
00:25:35.830 [2024-07-16 01:02:10.187904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.187932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.188095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.188124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.188289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.188317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.188495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.188520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.188693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.188721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.188903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.188932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.189125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.189153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.189316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.189341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.189542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.189569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.189761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.189789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 
00:25:35.830 [2024-07-16 01:02:10.189991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.190019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.190190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.190214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.190386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.190414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.190609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.190638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.190831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.190858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.191078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.191103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.191299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.191326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.191615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.191673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.191864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.191903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.192101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.192126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 
00:25:35.830 [2024-07-16 01:02:10.192281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.192306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.192453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.192482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.192682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.192710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.192910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.192936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.193107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.193131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.193332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.193359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.193556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.193583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.193784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.193809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.194011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.830 [2024-07-16 01:02:10.194040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.830 qpair failed and we were unable to recover it. 00:25:35.830 [2024-07-16 01:02:10.194219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.194246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 
00:25:35.831 [2024-07-16 01:02:10.194416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.194443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.194626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.194650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.194855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.194890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.195098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.195123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.195279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.195304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.195510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.195536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.195695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.195723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.195931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.195956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.196135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.196176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.196353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.196378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 
00:25:35.831 [2024-07-16 01:02:10.196530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.196555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.196761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.196788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.196995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.197021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.197168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.197193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.197392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.197420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.197623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.197648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.197842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.197866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.198067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.198092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.198296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.198328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.198607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.198635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 
00:25:35.831 [2024-07-16 01:02:10.198822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.198850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.199041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.199067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.199232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.199260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.199461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.199488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.199689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.199716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.199892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.199919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.200147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.200174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.200464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.200517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.200688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.200717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.200892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.200918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 
00:25:35.831 [2024-07-16 01:02:10.201120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.201149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.201340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.201367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.201565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.201592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.201771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.201796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.201972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.831 [2024-07-16 01:02:10.201997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.831 qpair failed and we were unable to recover it. 00:25:35.831 [2024-07-16 01:02:10.202203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.202230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.202404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.202431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.202627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.202652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.202824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.202854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.203059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.203085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 
00:25:35.832 [2024-07-16 01:02:10.203237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.203262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.203421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.203446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.203654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.203681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.203872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.204012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.204211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.204238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.204414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.204438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.204628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.204652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.204827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.204855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.205041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.205068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.205233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.205258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 
00:25:35.832 [2024-07-16 01:02:10.205411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.205454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.205644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.205671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.205866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.205903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.206078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.206103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.206286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.206310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.206479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.206504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.206713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.206740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.206935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.206961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.207156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.207184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.207344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.207376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 
00:25:35.832 [2024-07-16 01:02:10.207574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.207599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.207771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.207795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.207970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.207998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.208191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.208218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.208410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.208438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.208631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.208656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.208857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.208891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.209100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.209124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.209288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.209311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.209466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.209490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 
00:25:35.832 [2024-07-16 01:02:10.209688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.209712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.209911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.209938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.210099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.210125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.210299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.210322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.210519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.210545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.210766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.210792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.832 qpair failed and we were unable to recover it. 00:25:35.832 [2024-07-16 01:02:10.210968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.832 [2024-07-16 01:02:10.210996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.211193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.211218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.211412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.211439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.211645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.211670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 
00:25:35.833 [2024-07-16 01:02:10.211824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.211849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.212032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.212058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.212228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.212256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.212474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.212527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.212751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.212779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.212973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.212998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.213204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.213236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.213431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.213458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.213648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.213675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.213884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.213910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 
00:25:35.833 [2024-07-16 01:02:10.214076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.214104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.214322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.214350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.214520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.214548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.214739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.214764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.214959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.214988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.215213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.215241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.215434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.215462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.215634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.215659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.215865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.215913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.216108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.216136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 
00:25:35.833 [2024-07-16 01:02:10.216327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.216355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.216536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.216561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.216760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.216788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.216960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.216987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.217156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.217184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.217382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.217407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.217603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.217631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.217853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.217887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.218078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.218106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.218329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.218354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 
00:25:35.833 [2024-07-16 01:02:10.218528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.218555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.218728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.218756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.218943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.218972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.219154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.219179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.219408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.219436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.219607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.219634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.219829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.219857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.220040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.833 [2024-07-16 01:02:10.220065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.833 qpair failed and we were unable to recover it. 00:25:35.833 [2024-07-16 01:02:10.220262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.220289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.220501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.220549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 
00:25:35.834 [2024-07-16 01:02:10.220746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.220773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.221002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.221028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.221241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.221266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.221411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.221436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.221611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.221638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.221840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.221865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.222043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.222071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.222275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.222306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.222457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.222481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.222682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.222707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 
00:25:35.834 [2024-07-16 01:02:10.222923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.222948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.223133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.223158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.223327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.223354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.223558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.223584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.223755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.223783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.223956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.223984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.224174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.224202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.224400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.224425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.224624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.224651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.224839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.224867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 
00:25:35.834 [2024-07-16 01:02:10.225081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.225109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.225343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.225368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.225538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.225566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.225729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.225756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.225975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.226004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.226170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.226195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.226362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.226389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.226612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.226640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.226807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.226835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.227057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.227083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 
00:25:35.834 [2024-07-16 01:02:10.227263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.227292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.227485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.227520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.227723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.227751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.227955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.227980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.228128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.228154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.228361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.228386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.228569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.228596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.228795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.228820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.229033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.229061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.834 qpair failed and we were unable to recover it. 00:25:35.834 [2024-07-16 01:02:10.229249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.834 [2024-07-16 01:02:10.229277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 
00:25:35.835 [2024-07-16 01:02:10.229464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.229492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.229689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.229714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.229887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.229917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.230125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.230150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.230376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.230403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.230572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.230597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.230794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.230822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.230994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.231022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.231242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.231270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.231499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.231524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 
00:25:35.835 [2024-07-16 01:02:10.231725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.231753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.231962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.231991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.232162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.232190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.232390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.232416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.232638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.232666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.232865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.232898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.233098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.233126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.233333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.233358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.233540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.233565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.233741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.233767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 
00:25:35.835 [2024-07-16 01:02:10.233924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.233949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.234094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.234119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.234352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.234380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.234594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.234641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.234837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.234865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.235073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.235098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.235269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.235297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.235492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.235520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.235717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.235744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.235911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.235937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 
00:25:35.835 [2024-07-16 01:02:10.236135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.236163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.236353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.236381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.236547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.236575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.236783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.236808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.237013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.835 [2024-07-16 01:02:10.237041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.835 qpair failed and we were unable to recover it. 00:25:35.835 [2024-07-16 01:02:10.237269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.237301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.237499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.237527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.237704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.237728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.237954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.237982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.238199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.238243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 
00:25:35.836 [2024-07-16 01:02:10.238412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.238439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.238636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.238660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.238848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.238894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.239121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.239149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.239366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.239393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.239568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.239594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.239770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.239798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.239968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.239997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.240217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.240245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.240447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.240472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 
00:25:35.836 [2024-07-16 01:02:10.240645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.240673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.240863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.240898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.241072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.241100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.241272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.241296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.241496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.241523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.241716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.241744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.241952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.241977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.242131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.242156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.242326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.242354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 00:25:35.836 [2024-07-16 01:02:10.242549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.836 [2024-07-16 01:02:10.242574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.836 qpair failed and we were unable to recover it. 
00:25:35.836 [2024-07-16 01:02:10.242797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.836 [2024-07-16 01:02:10.242825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420
00:25:35.836 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 01:02:10.242797 through 01:02:10.287081, log timestamps 00:25:35.836-00:25:35.841 ...]
00:25:35.841 [2024-07-16 01:02:10.287274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.841 [2024-07-16 01:02:10.287302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.841 qpair failed and we were unable to recover it. 00:25:35.841 [2024-07-16 01:02:10.287495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.841 [2024-07-16 01:02:10.287524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.841 qpair failed and we were unable to recover it. 00:25:35.841 [2024-07-16 01:02:10.287745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.841 [2024-07-16 01:02:10.287773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.841 qpair failed and we were unable to recover it. 00:25:35.841 [2024-07-16 01:02:10.287983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.841 [2024-07-16 01:02:10.288009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.841 qpair failed and we were unable to recover it. 00:25:35.841 [2024-07-16 01:02:10.288212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.841 [2024-07-16 01:02:10.288239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.841 qpair failed and we were unable to recover it. 00:25:35.841 [2024-07-16 01:02:10.288501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.841 [2024-07-16 01:02:10.288551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.841 qpair failed and we were unable to recover it. 00:25:35.841 [2024-07-16 01:02:10.288771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.841 [2024-07-16 01:02:10.288799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.841 qpair failed and we were unable to recover it. 00:25:35.841 [2024-07-16 01:02:10.288984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.841 [2024-07-16 01:02:10.289009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.841 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.289208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.289238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.289519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.289568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 
00:25:35.842 [2024-07-16 01:02:10.289760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.289792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.289976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.290002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.290197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.290226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.290550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.290600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.290773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.290802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.290999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.291025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.291221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.291248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.291517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.291545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.291735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.291762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.292001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.292026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 
00:25:35.842 [2024-07-16 01:02:10.292176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.292201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.292386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.292411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.292600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.292627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.292824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.292849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.293037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.293062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.293245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.293270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.293467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.293495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.293727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.293752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.293931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.293959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.294133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.294161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 
00:25:35.842 [2024-07-16 01:02:10.294359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.294387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.294583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.294609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.294811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.294839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.295042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.295071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.295268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.295296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.295483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.295508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.295708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.295735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.295993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.296047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.296245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.296270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.296469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.296494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 
00:25:35.842 [2024-07-16 01:02:10.296695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.296722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.296932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.296957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.297137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.297162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.297310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.297335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.297562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.297590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.297810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.297837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.298013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.298041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.298269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.298294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.298502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.298529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.298757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.298810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 
00:25:35.842 [2024-07-16 01:02:10.299048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.299074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.842 [2024-07-16 01:02:10.299228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.842 [2024-07-16 01:02:10.299253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.842 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.299407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.299432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.299635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.299659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.299838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.299863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.300025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.300051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.300224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.300253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.300450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.300478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.300669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.300697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.300871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.300902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 
00:25:35.843 [2024-07-16 01:02:10.301129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.301156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.301339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.301363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.301554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.301582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.301765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.301790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.301995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.302020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.302309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.302334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.302510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.302535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.302717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.302742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.302972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.302998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.303184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.303209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 
00:25:35.843 [2024-07-16 01:02:10.303406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.303433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.303628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.303652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.303854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.303887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.304084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.304112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.304301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.304329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.304510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.304536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.304741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.304769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.304939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.304967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.305167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.305199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.305390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.305415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 
00:25:35.843 [2024-07-16 01:02:10.305610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.305637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.305855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.305888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.306094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.306122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.306288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.306312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.306541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.306569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.306743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.306771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.306965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.306993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.307198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.307223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.307422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.307451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.307649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.307677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 
00:25:35.843 [2024-07-16 01:02:10.307908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.307933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.308136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.308161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.308360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.308389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.308590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.308615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.308812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.308837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.309060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.843 [2024-07-16 01:02:10.309086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.843 qpair failed and we were unable to recover it. 00:25:35.843 [2024-07-16 01:02:10.309254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.309282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.309505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.309532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.309724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.309751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.309949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.309975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 
00:25:35.844 [2024-07-16 01:02:10.310168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.310196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.310470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.310517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.310709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.310737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.310929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.310955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.311157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.311182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.311358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.311382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.311568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.311596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.311826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.311850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.312030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.312055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.312258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.312283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 
00:25:35.844 [2024-07-16 01:02:10.312457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.312481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.312681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.312706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.312898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.312926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.313225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.313279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.313481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.313509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.313711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.313737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.313941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.313969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.314213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.314259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.314492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.314532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.314705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.314730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 
00:25:35.844 [2024-07-16 01:02:10.314933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.314958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.315150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.315177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.315364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.315392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.315609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.315634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.315872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.315907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.316086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.316115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.316313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.316341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.316517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.316542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.316737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.316765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.317024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.317075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 
00:25:35.844 [2024-07-16 01:02:10.317270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.317297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.317500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.317525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.317730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.317757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.317960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.844 [2024-07-16 01:02:10.317988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.844 qpair failed and we were unable to recover it. 00:25:35.844 [2024-07-16 01:02:10.318184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.318211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.318417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.318442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.318661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.318688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.318884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.318913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.319108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.319135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.319358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.319383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 
00:25:35.845 [2024-07-16 01:02:10.319588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.319616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.319809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.319837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.320014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.320043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.320254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.320279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.320494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.320522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.320714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.320742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.320976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.321006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.321183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.321208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.321414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.321442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.321666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.321694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 
00:25:35.845 [2024-07-16 01:02:10.321910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.321938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.322107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.322132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.322332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.322360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.322541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.322568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.322739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.322766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.322963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.322988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.323184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.323212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.323475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.323523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.323693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.323721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.323893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.323919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 
00:25:35.845 [2024-07-16 01:02:10.324123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.324151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.324414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.324464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.324692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.324719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.845 qpair failed and we were unable to recover it. 00:25:35.845 [2024-07-16 01:02:10.324950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.845 [2024-07-16 01:02:10.324976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.325219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.325247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.325506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.325555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.325780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.325807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.325981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.326007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.326176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.326203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.326458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.326507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 
00:25:35.846 [2024-07-16 01:02:10.326731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.326759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.326960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.326985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.327183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.327211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.327406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.327434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.327632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.327659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.327848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.327872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.328038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.328063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.328255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.328283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.328475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.328502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.328732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.328757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 
00:25:35.846 [2024-07-16 01:02:10.328936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.328972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.329155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.329181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.329353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.329380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.329582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.329607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.329827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.329854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.330084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.330112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.330276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.330304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.330520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.330548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.330727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.330755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.330959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.330984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 
00:25:35.846 [2024-07-16 01:02:10.331212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.331240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.331434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.331459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.331617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.331642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.331821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.331846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.332062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.332090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.332253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.332278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.332500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.332528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.332696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.332723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.332930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.332958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.333189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.333214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 
00:25:35.846 [2024-07-16 01:02:10.333396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.333421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.333603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.333632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.333831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.333856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.334043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.334068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.846 qpair failed and we were unable to recover it. 00:25:35.846 [2024-07-16 01:02:10.334262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.846 [2024-07-16 01:02:10.334289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.334518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.334567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.334790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.334817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.335022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.335047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.335274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.335301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.335623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.335675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 
00:25:35.847 [2024-07-16 01:02:10.335847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.335891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.336100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.336125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.336328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.336356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.336683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.336734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.336958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.336991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.337190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.337216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.337383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.337411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.337673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.337723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.337939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.337967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.338165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.338189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 
00:25:35.847 [2024-07-16 01:02:10.338416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.338444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.338666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.338694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.338866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.338899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.339126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.339150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.339379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.339406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.339573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.339601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.339794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.339823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.340006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.340032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.340219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.340244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.340509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.340560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 
00:25:35.847 [2024-07-16 01:02:10.340755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.340782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.340979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.341005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.341201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.341228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.341456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.341481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.341681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.341706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.341900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.341925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.342104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.342128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.342419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.342469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.342667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.342695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.342873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.342904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 
00:25:35.847 [2024-07-16 01:02:10.343109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.343137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.343356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.343384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.343595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.343623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.343797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.343822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.344023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.344051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.847 qpair failed and we were unable to recover it. 00:25:35.847 [2024-07-16 01:02:10.344252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.847 [2024-07-16 01:02:10.344279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.344479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.344507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.344669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.344694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.344902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.344931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.345101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.345129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 
00:25:35.848 [2024-07-16 01:02:10.345361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.345386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.345564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.345589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.345787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.345814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.345971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.345999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.346205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.346230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.346433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.346461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.346631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.346659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.346887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.346916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.347114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.347142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.347339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.347364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 
00:25:35.848 [2024-07-16 01:02:10.347571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.347599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.347828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.347853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.348041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.348066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.348256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.348281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.348483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.348511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.348704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.348732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.348925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.348954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.349183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.349208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.349391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.349418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.349656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.349680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 
00:25:35.848 [2024-07-16 01:02:10.349886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.349914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.350115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.350140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.350312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.350339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.350497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.350525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.350749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.350777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.350961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.350987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.351185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.351213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.351433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.848 [2024-07-16 01:02:10.351461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.848 qpair failed and we were unable to recover it. 00:25:35.848 [2024-07-16 01:02:10.351658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.351683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.351895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.351920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 
00:25:35.849 [2024-07-16 01:02:10.352121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.352148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.352438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.352493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.352699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.352731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.352939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.352965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.353118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.353143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.353396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.353447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.353669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.353697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.353889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.353915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.354116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.354143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.354303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.354331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 
00:25:35.849 [2024-07-16 01:02:10.354525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.354554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.354784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.354810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.355009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.355037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.355282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.355327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.355521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.355548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.355771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.355796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.355972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.356001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.356190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.356218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.356414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.356439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.356649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.356674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 
00:25:35.849 [2024-07-16 01:02:10.356903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.356931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.357161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.357185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.357397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.357425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.357641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.357666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.357831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.357859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.358062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.358090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.358314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.358341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.358546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.358571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.358793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.358821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.359021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.359050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 
00:25:35.849 [2024-07-16 01:02:10.359230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.359258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.359483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.359507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.359686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.359713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.359913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.359941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.360109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.360136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.360328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.360353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.360549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.360576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.360767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.360795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.360996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.849 [2024-07-16 01:02:10.361022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.849 qpair failed and we were unable to recover it. 00:25:35.849 [2024-07-16 01:02:10.361203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.361227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 
00:25:35.850 [2024-07-16 01:02:10.361449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.361477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.361768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.361817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.362035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.362063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.362222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.362251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.362425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.362450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.362739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.362790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.362988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.363016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.363210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.363234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.363389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.363414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.363614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.363642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 
00:25:35.850 [2024-07-16 01:02:10.363835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.363862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.364049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.364074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.364270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.364298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.364564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.364612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.364808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.364835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.365078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.365103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.365342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.365370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.365657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.365707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.365930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.365955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.366161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.366186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 
00:25:35.850 [2024-07-16 01:02:10.366387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.366415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.366610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.366637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.366854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.366887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.367124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.367149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.367325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.367352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.367519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.367549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.367751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.367779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.367952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.367978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.368178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.368205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.368484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.368511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 
00:25:35.850 [2024-07-16 01:02:10.368679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.368711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.368914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.368939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.369177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.369205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.369489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.369538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.369727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.369754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.369946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.369971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.370150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.370178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.370367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.370394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.370591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.370618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 00:25:35.850 [2024-07-16 01:02:10.370839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.850 [2024-07-16 01:02:10.370864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.850 qpair failed and we were unable to recover it. 
00:25:35.851 [2024-07-16 01:02:10.371035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.371062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.371323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.371372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.371592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.371620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.371825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.371849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.372063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.372088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.372285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.372313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.372514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.372541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.372740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.372765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.372965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.372993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.373275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.373325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 
00:25:35.851 [2024-07-16 01:02:10.373526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.373555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.373781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.373805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.373988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.374016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.374220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.374245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.374402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.374427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.374604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.374629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.374827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.374855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.375036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.375063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.375268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.375295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.375470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.375495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 
00:25:35.851 [2024-07-16 01:02:10.375694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.375721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.375891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.375920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.376105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.376132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.376325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.376350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.376550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.376578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.376744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.376773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.376975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.377003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.377177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.377201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.377400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.377429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.377758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.377817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 
00:25:35.851 [2024-07-16 01:02:10.377997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.378026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.378222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.378251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.378426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.378454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.378737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.378786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.378980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.379009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.379211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.379235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.379436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.379464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.379744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.379795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.379971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.379998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.851 [2024-07-16 01:02:10.380202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.380227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 
00:25:35.851 [2024-07-16 01:02:10.380420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.851 [2024-07-16 01:02:10.380448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.851 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.380692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.380743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.380966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.380995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.381195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.381220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.381420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.381447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.381769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.381827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.382008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.382035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.382212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.382237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.382432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.382460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.382684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.382732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 
00:25:35.852 [2024-07-16 01:02:10.382954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.382980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.383180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.383205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.383433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.383460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.383757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.383808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.384031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.384058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.384253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.384278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.384472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.384500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.384666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.384693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.384899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.384925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.385083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.385108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 
00:25:35.852 [2024-07-16 01:02:10.385314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.385354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.385586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.385637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.385833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.385860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.386068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.386093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.386274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.386298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.386533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.386584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.386760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.386787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.386995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.387021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.387218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.387245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.387500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.387549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 
00:25:35.852 [2024-07-16 01:02:10.387782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.387810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.388035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.388060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.388237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.388265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.388531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.388580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.388768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.388795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.388974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.388999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.389199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.389228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.389518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.389573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.389769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.389797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.389973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.389999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 
00:25:35.852 [2024-07-16 01:02:10.390204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.390232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.852 [2024-07-16 01:02:10.390523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.852 [2024-07-16 01:02:10.390571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.852 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.390747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.390772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.390973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.390999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.391201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.391228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.391479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.391528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.391756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.391783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.391987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.392012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.392238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.392266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.392504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.392529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 
00:25:35.853 [2024-07-16 01:02:10.392723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.392750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.392931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.392958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.393160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.393188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.393464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.393515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.393706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.393734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.393936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.393962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.394141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.394166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.394417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.394466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.394658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.394686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.394911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.394941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 
00:25:35.853 [2024-07-16 01:02:10.395137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.395165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.395395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.395447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.395644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.395672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.395848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.395873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.396058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.396086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.396302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.396330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.396504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.396531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.396721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.396746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.396927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.396957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.397250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.397298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 
00:25:35.853 [2024-07-16 01:02:10.397494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.397522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.397752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.397776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.398009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.398037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.398312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.398362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.398540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.398567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.398788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.398813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.399031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.853 [2024-07-16 01:02:10.399059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.853 qpair failed and we were unable to recover it. 00:25:35.853 [2024-07-16 01:02:10.399334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.854 [2024-07-16 01:02:10.399384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.854 qpair failed and we were unable to recover it. 00:25:35.854 [2024-07-16 01:02:10.399586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.854 [2024-07-16 01:02:10.399613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.854 qpair failed and we were unable to recover it. 00:25:35.854 [2024-07-16 01:02:10.399788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.854 [2024-07-16 01:02:10.399813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.854 qpair failed and we were unable to recover it. 
00:25:35.859 [2024-07-16 01:02:10.446358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.446385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.446638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.446689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.446899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.446928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.447129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.447153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.447352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.447380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.447615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.447643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.447835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.447863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.448092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.448117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.448290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.448318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.448617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.448673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 
00:25:35.859 [2024-07-16 01:02:10.448868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.448907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.449097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.449122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.449277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.449302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.449457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.449482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.449712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.449739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.449967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.449993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.450173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.450201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.450399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.450423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.450626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.450662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.450840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.450865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 
00:25:35.859 [2024-07-16 01:02:10.451077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.451105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.451273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.451301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.451502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.451530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.451761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.451785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.452009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.452043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.452239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.452266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.452466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.452494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.452693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.452718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.452914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.452942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.453133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.453161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 
00:25:35.859 [2024-07-16 01:02:10.453347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.453375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.453571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.453596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.453790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.453817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.453993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.454021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.454194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.454222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.454422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.454447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.859 qpair failed and we were unable to recover it. 00:25:35.859 [2024-07-16 01:02:10.454645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.859 [2024-07-16 01:02:10.454673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.454868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.454908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.455113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.455141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.455314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.455339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 
00:25:35.860 [2024-07-16 01:02:10.455493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.455518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.455678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.455704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.455932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.455960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.456134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.456159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.456357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.456385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.456610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.456634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.456788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.456813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.456999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.457024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.457216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.457244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.457513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.457563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 
00:25:35.860 [2024-07-16 01:02:10.457750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.457778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.458002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.458031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.458239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.458267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.458488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.458516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.458717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.458745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.458954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.458979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.459179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.459207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.459456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.459509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.459702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.459729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.459933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.459958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 
00:25:35.860 [2024-07-16 01:02:10.460132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.460160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.460320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.460348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.460540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.460568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.460769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.460794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.460994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.461022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.461246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.461274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.461471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.461498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.461699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.461724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.461896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.461924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.462120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.462147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 
00:25:35.860 [2024-07-16 01:02:10.462308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.462335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.462529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.462553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.462734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.462762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.462933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.462961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.463156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.463184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.463363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.463388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.463615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.463642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.860 qpair failed and we were unable to recover it. 00:25:35.860 [2024-07-16 01:02:10.463837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.860 [2024-07-16 01:02:10.463865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.464068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.464095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.464304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.464328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 
00:25:35.861 [2024-07-16 01:02:10.464535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.464562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.464758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.464784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.464963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.464988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.465163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.465188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.465384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.465412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.465582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.465611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.465809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.465837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.466064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.466089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.466290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.466317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.466513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.466540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 
00:25:35.861 [2024-07-16 01:02:10.466762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.466789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.467008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.467034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.467232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.467263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.467461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.467486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.467686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.467714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.467945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.467970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.468175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.468203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.468378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.468406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.468625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.468652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.468888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.468914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 
00:25:35.861 [2024-07-16 01:02:10.469091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.469119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.469339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.469366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.469591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.469618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.469817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.469841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.470076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.470104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.470324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.470352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.470553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.470578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.470763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.470788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.470957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.470986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.471183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.471211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 
00:25:35.861 [2024-07-16 01:02:10.471404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.471432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.471652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.471677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.471854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.861 [2024-07-16 01:02:10.471886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.861 qpair failed and we were unable to recover it. 00:25:35.861 [2024-07-16 01:02:10.472095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.472123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.472285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.472312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.472515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.472541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.472741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.472769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.472965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.472993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.473166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.473193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.473389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.473414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 
00:25:35.862 [2024-07-16 01:02:10.473585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.473613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.473764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.473790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.473986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.474014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.474206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.474231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.474389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.474413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.474609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.474637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.474811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.474838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.475030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.475055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.475282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.475310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.475511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.475560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 
00:25:35.862 [2024-07-16 01:02:10.475771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.475799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.475968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.475994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.476215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.476242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.476446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.476471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.476650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.476676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.476851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.476883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.477087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.477114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.477312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.477340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.477538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.477565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.477748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.477772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 
00:25:35.862 [2024-07-16 01:02:10.478001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.478029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.478202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.478229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.478448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.478475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.478658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.478682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.478912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.478940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.479158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.479186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.479404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.479432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.479668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.479693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.479898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.479926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.480093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.480120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 
00:25:35.862 [2024-07-16 01:02:10.480345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.480373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.480568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.480593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.480762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.480789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.481018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.862 [2024-07-16 01:02:10.481046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.862 qpair failed and we were unable to recover it. 00:25:35.862 [2024-07-16 01:02:10.481243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.481271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.481447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.481473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.481682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.481710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.481893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.481922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.482122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.482149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.482379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.482404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 
00:25:35.863 [2024-07-16 01:02:10.482579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.482611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.482807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.482835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.483045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.483073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.483252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.483276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.483476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.483504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.483736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.483761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.483920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.483946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.484096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.484121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.484344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.484372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.484606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.484654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 
00:25:35.863 [2024-07-16 01:02:10.484852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.484887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.485065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.485089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.485269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.485295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.485538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.485586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.485811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.485839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.486071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.486096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.486327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.486354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.486551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.486576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.486777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.486805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.487007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.487032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 
00:25:35.863 [2024-07-16 01:02:10.487180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.487205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.487355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.487380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.487581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.487606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.487827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.487852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.488014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.488039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.488242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.488267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.488483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.488510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.488732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.488757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.488971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.489000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.489250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.489297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 
00:25:35.863 [2024-07-16 01:02:10.489466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.489494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.489689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.489713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.489917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.489945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.490143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.490171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.863 [2024-07-16 01:02:10.490370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.863 [2024-07-16 01:02:10.490397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.863 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.490580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.490605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.490780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.490804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.490985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.491013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.491209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.491237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.491398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.491423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 
00:25:35.864 [2024-07-16 01:02:10.491621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.491649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.491855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.491895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.492066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.492094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.492293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.492318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.492524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.492551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.492745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.492773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.492973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.493002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.493191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.493216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.493408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.493436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.493633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.493678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 
00:25:35.864 [2024-07-16 01:02:10.493869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.493905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.494111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.494136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.494285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.494310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.494510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.494535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.494709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.494734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.494938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.494963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.495146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.495173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.495392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.495438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.495604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.495631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.495852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.495890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 
00:25:35.864 [2024-07-16 01:02:10.496086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.496114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.496379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.496424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.496587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.496615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.496804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.496829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.497003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.497029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.497272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.497321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.497515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.497543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.497773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.497797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.497997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.498029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.498258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.498283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 
00:25:35.864 [2024-07-16 01:02:10.498439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.498464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.498621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.498645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.498845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.498873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.499106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.499133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.864 qpair failed and we were unable to recover it. 00:25:35.864 [2024-07-16 01:02:10.499311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.864 [2024-07-16 01:02:10.499338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.499542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.499566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.499739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.499767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.499938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.499967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.500193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.500218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.500405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.500429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 
00:25:35.865 [2024-07-16 01:02:10.500622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.500650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.500874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.500912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.501085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.501113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.501283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.501308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.501505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.501533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.501692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.501720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.501886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.501914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.502109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.502134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.502359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.502387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.502615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.502640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 
00:25:35.865 [2024-07-16 01:02:10.502830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.502858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.503049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.503074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.503233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.503258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.503405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.503430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.503633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.503660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.503838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.503862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.504080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.504108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.504380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.504405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.504543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.504568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.504770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.504795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 
00:25:35.865 [2024-07-16 01:02:10.504974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.505003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.505279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.505335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.505537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.505562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.505740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.505765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.505932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.505960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.506158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.506186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.506379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.506406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.506600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.506625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.506820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.506847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.507038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.507071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 
00:25:35.865 [2024-07-16 01:02:10.507278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.507303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.507451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.507476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.507676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.507704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.507954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.508005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.508231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.508256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.865 qpair failed and we were unable to recover it. 00:25:35.865 [2024-07-16 01:02:10.508411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.865 [2024-07-16 01:02:10.508436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.508657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.508685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.508887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.508915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.509112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.509139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.509336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.509361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 
00:25:35.866 [2024-07-16 01:02:10.509559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.509587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.509776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.509804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.510033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.510058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.510271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.510296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.510500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.510527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.510688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.510717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.510916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.510944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.511148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.511173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.511395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.511423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.511630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.511679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 
00:25:35.866 [2024-07-16 01:02:10.511898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.511926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.512130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.512154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.512380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.512407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.512646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.512674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.512889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.512915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.513056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.513080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.513309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.513340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.513615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.513643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.513864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.513900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 00:25:35.866 [2024-07-16 01:02:10.514079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.866 [2024-07-16 01:02:10.514103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:35.866 qpair failed and we were unable to recover it. 
00:25:35.866 [2024-07-16 01:02:10.514303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.866 [2024-07-16 01:02:10.514330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420
00:25:35.866 qpair failed and we were unable to recover it.
00:25:35.866 [2024-07-16 01:02:10.514612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.866 [2024-07-16 01:02:10.514659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420
00:25:35.866 qpair failed and we were unable to recover it.
[... the same three-line pattern (connect() failed, errno = 111 -> sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every reconnect attempt between 01:02:10.514 and 01:02:10.563 ...]
00:25:36.147 [2024-07-16 01:02:10.563325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.147 [2024-07-16 01:02:10.563352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420
00:25:36.147 qpair failed and we were unable to recover it.
00:25:36.147 [2024-07-16 01:02:10.563544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.147 [2024-07-16 01:02:10.563571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.147 qpair failed and we were unable to recover it. 00:25:36.147 [2024-07-16 01:02:10.563772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.147 [2024-07-16 01:02:10.563797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.564025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.564053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.564385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.564437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.564659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.564691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.564870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.564913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.565113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.565140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.565397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.565448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.565639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.565667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.565937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.565962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 
00:25:36.148 [2024-07-16 01:02:10.566143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.566168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.566349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.566373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.566586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.566626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.566826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.566851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.567076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.567101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.567373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.567420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.567644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.567672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.567873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.567903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.568147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.568175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.568476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.568535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 
00:25:36.148 [2024-07-16 01:02:10.568727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.568753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.568956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.568982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.569188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.569216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.569463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.569516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.569684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.569711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.569890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.569915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.570113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.570140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.570367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.570394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.570594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.570621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.570820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.570845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 
00:25:36.148 [2024-07-16 01:02:10.571054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.571079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.571402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.571458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.571678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.571706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.571886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.571912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.572110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.148 [2024-07-16 01:02:10.572138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.148 qpair failed and we were unable to recover it. 00:25:36.148 [2024-07-16 01:02:10.572343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.572368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.572552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.572577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.572780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.572805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.572962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.572989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.573186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.573214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 
00:25:36.149 [2024-07-16 01:02:10.573387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.573415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.573584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.573609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.573778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.573806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.573999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.574027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.574248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.574276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.574453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.574478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.574675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.574703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.574938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.574963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.575137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.575162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.575340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.575365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 
00:25:36.149 [2024-07-16 01:02:10.575594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.575621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.575823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.575850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.576078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.576106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.576289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.576314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.576483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.576511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.576731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.576759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.576953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.576981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.577155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.577180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.577379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.577407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.577605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.577633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 
00:25:36.149 [2024-07-16 01:02:10.577848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.577884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.578066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.578091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.578316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.578343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.578634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.578684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.578903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.578932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.579114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.579139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.579342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.579367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.579702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.579756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.579954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.579982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.580204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.580228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 
00:25:36.149 [2024-07-16 01:02:10.580398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.580427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.580628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.580674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.580873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.580928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.581120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.581145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.581343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.581371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.581552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.149 [2024-07-16 01:02:10.581579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.149 qpair failed and we were unable to recover it. 00:25:36.149 [2024-07-16 01:02:10.581755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.581783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.581979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.582006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.582185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.582210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.582431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.582459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 
00:25:36.150 [2024-07-16 01:02:10.582658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.582685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.582888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.582922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.583121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.583148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.583438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.583486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.583711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.583739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.583936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.583962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.584194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.584222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.584448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.584501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.584725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.584753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.584922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.584948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 
00:25:36.150 [2024-07-16 01:02:10.585171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.585199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.585446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.585474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.585671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.585698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.585874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.585905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.586094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.586122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.586363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.586388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.586583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.586608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.586848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.586873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.587129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.587156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.587446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.587501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 
00:25:36.150 [2024-07-16 01:02:10.587692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.587720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.587895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.587921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.588113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.588141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.588422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.588471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.588701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.588729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.588928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.588954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.589165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.589193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.589435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.589464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.589685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.589714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.589913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.589938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 
00:25:36.150 [2024-07-16 01:02:10.590146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.590173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.590422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.590447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.590656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.590696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.590926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.590952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.591156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.591183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.591355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.591383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.150 [2024-07-16 01:02:10.591553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.150 [2024-07-16 01:02:10.591581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.150 qpair failed and we were unable to recover it. 00:25:36.151 [2024-07-16 01:02:10.591807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.151 [2024-07-16 01:02:10.591831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.151 qpair failed and we were unable to recover it. 00:25:36.151 [2024-07-16 01:02:10.592011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.151 [2024-07-16 01:02:10.592036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.151 qpair failed and we were unable to recover it. 00:25:36.151 [2024-07-16 01:02:10.592203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.151 [2024-07-16 01:02:10.592258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.151 qpair failed and we were unable to recover it. 
00:25:36.151 [2024-07-16 01:02:10.592506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.151 [2024-07-16 01:02:10.592534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.151 qpair failed and we were unable to recover it. 00:25:36.151 [2024-07-16 01:02:10.592710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.151 [2024-07-16 01:02:10.592735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.151 qpair failed and we were unable to recover it. 00:25:36.151 [2024-07-16 01:02:10.592937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.151 [2024-07-16 01:02:10.592966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.151 qpair failed and we were unable to recover it. 00:25:36.151 [2024-07-16 01:02:10.593139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.151 [2024-07-16 01:02:10.593166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.151 qpair failed and we were unable to recover it. 00:25:36.151 [2024-07-16 01:02:10.593339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.151 [2024-07-16 01:02:10.593367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.151 qpair failed and we were unable to recover it. 00:25:36.151 [2024-07-16 01:02:10.593540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.151 [2024-07-16 01:02:10.593565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.151 qpair failed and we were unable to recover it. 00:25:36.151 [2024-07-16 01:02:10.593790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.151 [2024-07-16 01:02:10.593818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.151 qpair failed and we were unable to recover it. 00:25:36.151 [2024-07-16 01:02:10.594143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.151 [2024-07-16 01:02:10.594202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.151 qpair failed and we were unable to recover it. 00:25:36.151 [2024-07-16 01:02:10.594420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.151 [2024-07-16 01:02:10.594448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.151 qpair failed and we were unable to recover it. 00:25:36.151 [2024-07-16 01:02:10.594628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.151 [2024-07-16 01:02:10.594653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.151 qpair failed and we were unable to recover it. 
00:25:36.151 [2024-07-16 01:02:10.594887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.151 [2024-07-16 01:02:10.594915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.151 qpair failed and we were unable to recover it.
00:25:36.151-00:25:36.156 [2024-07-16 01:02:10.595112 - 01:02:10.642734] (the same three-message error sequence repeats for every remaining connection attempt in this interval: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:25:36.156 [2024-07-16 01:02:10.642958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.156 [2024-07-16 01:02:10.642986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.156 qpair failed and we were unable to recover it. 00:25:36.156 [2024-07-16 01:02:10.643170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.156 [2024-07-16 01:02:10.643195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.156 qpair failed and we were unable to recover it. 00:25:36.156 [2024-07-16 01:02:10.643394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.156 [2024-07-16 01:02:10.643419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.156 qpair failed and we were unable to recover it. 00:25:36.156 [2024-07-16 01:02:10.643598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.156 [2024-07-16 01:02:10.643625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.156 qpair failed and we were unable to recover it. 00:25:36.156 [2024-07-16 01:02:10.643815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.156 [2024-07-16 01:02:10.643850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.156 qpair failed and we were unable to recover it. 00:25:36.156 [2024-07-16 01:02:10.644038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.156 [2024-07-16 01:02:10.644066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.156 qpair failed and we were unable to recover it. 00:25:36.156 [2024-07-16 01:02:10.644271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.156 [2024-07-16 01:02:10.644313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.156 qpair failed and we were unable to recover it. 00:25:36.156 [2024-07-16 01:02:10.644521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.156 [2024-07-16 01:02:10.644548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.156 qpair failed and we were unable to recover it. 00:25:36.156 [2024-07-16 01:02:10.644739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.156 [2024-07-16 01:02:10.644767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.156 qpair failed and we were unable to recover it. 00:25:36.156 [2024-07-16 01:02:10.644945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.156 [2024-07-16 01:02:10.644971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.156 qpair failed and we were unable to recover it. 
00:25:36.156 [2024-07-16 01:02:10.645122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.156 [2024-07-16 01:02:10.645167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.156 qpair failed and we were unable to recover it. 00:25:36.156 [2024-07-16 01:02:10.645397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.156 [2024-07-16 01:02:10.645425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.156 qpair failed and we were unable to recover it. 00:25:36.156 [2024-07-16 01:02:10.645622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.156 [2024-07-16 01:02:10.645650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.156 qpair failed and we were unable to recover it. 00:25:36.156 [2024-07-16 01:02:10.645850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.156 [2024-07-16 01:02:10.645882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.156 qpair failed and we were unable to recover it. 00:25:36.156 [2024-07-16 01:02:10.646058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.156 [2024-07-16 01:02:10.646086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.156 qpair failed and we were unable to recover it. 00:25:36.156 [2024-07-16 01:02:10.646288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.156 [2024-07-16 01:02:10.646315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.156 qpair failed and we were unable to recover it. 00:25:36.156 [2024-07-16 01:02:10.646512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.156 [2024-07-16 01:02:10.646539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.646747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.646771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.646950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.646976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.647176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.647200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 
00:25:36.157 [2024-07-16 01:02:10.647381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.647405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.647645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.647670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.647899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.647928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.648226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.648277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.648496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.648523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.648756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.648781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.648983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.649012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.649270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.649320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.649537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.649564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.649770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.649795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 
00:25:36.157 [2024-07-16 01:02:10.650001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.650030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.650308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.650336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.650534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.650562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.650792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.650817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.650991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.651019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.651238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.651266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.651460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.651488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.651656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.651681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.651885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.651914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.652149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.652176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 
00:25:36.157 [2024-07-16 01:02:10.652367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.652394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.652569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.652594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.652790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.652819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.653017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.653046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.653267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.653295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.653497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.653527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.653733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.653761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.653965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.653991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.654171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.654196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 00:25:36.157 [2024-07-16 01:02:10.654401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.157 [2024-07-16 01:02:10.654426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.157 qpair failed and we were unable to recover it. 
00:25:36.157 [2024-07-16 01:02:10.654637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.654664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.654888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.654916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.655116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.655141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.655343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.655368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.655567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.655595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.655786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.655813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.656035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.656063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.656262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.656286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.656460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.656487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.656678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.656706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 
00:25:36.158 [2024-07-16 01:02:10.656898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.656930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.657129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.657153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.657357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.657384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.657584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.657609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.657830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.657857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.658105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.658130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.658334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.658362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.658625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.658650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.658821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.658845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.659040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.659066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 
00:25:36.158 [2024-07-16 01:02:10.659260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.659288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.659513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.659563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.659780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.659812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.660014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.660040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.660244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.660271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.660535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.660584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.660803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.660832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.661040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.661067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.661235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.661263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.661494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.661521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 
00:25:36.158 [2024-07-16 01:02:10.661747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.661775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.661947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.661973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.662154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.662179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.662391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.662432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.662628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.662656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.662895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.662921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.663143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.663169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.663322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.663347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.663546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.663573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.663776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.663801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 
00:25:36.158 [2024-07-16 01:02:10.664000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.158 [2024-07-16 01:02:10.664028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.158 qpair failed and we were unable to recover it. 00:25:36.158 [2024-07-16 01:02:10.664227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.664254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.664479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.664507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.664718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.664743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.664909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.664941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.665133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.665161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.665363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.665391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.665555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.665580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.665804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.665832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.666041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.666069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 
00:25:36.159 [2024-07-16 01:02:10.666278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.666305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.666495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.666520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.666700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.666725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.666896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.666924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.667130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.667158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.667363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.667390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.667590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.667619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.667790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.667818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.668019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.668048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.668227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.668252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 
00:25:36.159 [2024-07-16 01:02:10.668446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.668473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.668711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.668736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.668892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.668917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.669118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.669147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.669359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.669387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.669660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.669709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.669911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.669939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.670519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.670550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.670755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.670781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.670961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.670992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 
00:25:36.159 [2024-07-16 01:02:10.671216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.671244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.671426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.671451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.671615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.671639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.671839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.671867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.672075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.672104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.672290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.672314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.672496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.672520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.672742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.672792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.673001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.673027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.673177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.673202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 
00:25:36.159 [2024-07-16 01:02:10.673384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.673409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.673619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.159 [2024-07-16 01:02:10.673644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.159 qpair failed and we were unable to recover it. 00:25:36.159 [2024-07-16 01:02:10.673823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.673847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.674047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.674073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.674265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.674290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.674468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.674493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.674697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.674722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.674866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.674899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.675083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.675108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.675264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.675289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 
00:25:36.160 [2024-07-16 01:02:10.675473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.675501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.675676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.675700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.675886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.675912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.676068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.676093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.676299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.676327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.676499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.676523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.676725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.676752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.676951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.676976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.677159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.677187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.677363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.677388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 
00:25:36.160 [2024-07-16 01:02:10.677592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.677619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.677820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.677844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.678019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.678061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.678237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.678261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.678494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.678522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.678712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.678737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.678915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.678941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.679098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.679123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.679316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.679343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.679533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.679579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 
00:25:36.160 [2024-07-16 01:02:10.679749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.679774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.679956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.679981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.680181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.680209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.680452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.680501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.680684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.680708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.680899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.680925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.681129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.681154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.681423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.681472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.681672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.681700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.681904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.681930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 
00:25:36.160 [2024-07-16 01:02:10.682089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.682114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.682292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.682317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.682523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.160 [2024-07-16 01:02:10.682548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.160 qpair failed and we were unable to recover it. 00:25:36.160 [2024-07-16 01:02:10.682724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.682749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.682920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.682946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.683150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.683175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.683415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.683443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.683650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.683674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.683884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.683909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.684062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.684086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 
00:25:36.161 [2024-07-16 01:02:10.684263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.684290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.684472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.684500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.684674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.684699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.684873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.684906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.685080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.685108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.685340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.685364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.685541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.685568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.685790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.685815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.686003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.686029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.686185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.686210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 
00:25:36.161 [2024-07-16 01:02:10.686391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.686416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.686568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.686593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.686768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.686792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.686955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.686981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.687180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.687205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.687364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.687389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.687604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.687629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.687806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.687830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.688061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.688086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.688279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.688303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 
00:25:36.161 [2024-07-16 01:02:10.688486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.688511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.688676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.688701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.688901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.688926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.689105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.689133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.689307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.689333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.689480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.689505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.689681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.689706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.689854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.161 [2024-07-16 01:02:10.689896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.161 qpair failed and we were unable to recover it. 00:25:36.161 [2024-07-16 01:02:10.690078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.690111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.690344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.690369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 
00:25:36.162 [2024-07-16 01:02:10.690514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.690554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.690780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.690805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.691001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.691027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.691205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.691231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.691380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.691406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.691606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.691632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.691869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.691922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.692125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.692151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.692365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.692392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.692563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.692591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 
00:25:36.162 [2024-07-16 01:02:10.692754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.692782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.692960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.692985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.693215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.693270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.693516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.693547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.693750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.693781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.693991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.694019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.694183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.694210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.694475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.694523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.694744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.694775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.694953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.694979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 
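Every entry in this run is the same two-step failure: the connect() call inside posix_sock_create returns errno = 111, which on Linux is ECONNREFUSED, so nvme_tcp_qpair_connect_sock cannot establish a TCP qpair to 10.0.0.2 port 4420 (the standard NVMe over Fabrics port). ECONNREFUSED usually means the host was reachable but nothing was accepting connections on that port, i.e. the nvmf target listener is not up at that address at the time of the attempt. Note also that partway through this run the failing qpair handle changes from 0x9733f0 to 0x7fae18000b90 while the address and port stay the same, which points at the listener side rather than at a particular qpair object. The following is a minimal, hypothetical stand-alone sketch (plain POSIX sockets, not SPDK code; the address and port are copied from the log only for illustration) that produces the same errno when pointed at a reachable host with no listener on the port:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Plain blocking TCP connect to the address/port seen in the log.
     * With a reachable host and no listener on the port, connect()
     * fails and errno is 111 (ECONNREFUSED) on Linux. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Once something is actually listening on 10.0.0.2:4420, the same connect() succeeds and the qpair setup can proceed, so the first thing to check for this class of failure is whether the target listener was running at the moment of the attempt.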
00:25:36.162 [2024-07-16 01:02:10.695176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.695204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.695440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.695500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.695706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.695735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.695976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.696003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.696201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.696238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.696476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.696510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.696708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.696737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.696914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.696939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.697161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.697192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.697476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.697525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 
00:25:36.162 [2024-07-16 01:02:10.697749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.697778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.697973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.697999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.698166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.698192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.698386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.698411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.698624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.698652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.698829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.698854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.699045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.699071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.699431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.699487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.699715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.162 [2024-07-16 01:02:10.699741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.162 qpair failed and we were unable to recover it. 00:25:36.162 [2024-07-16 01:02:10.699927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.699954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 
00:25:36.163 [2024-07-16 01:02:10.700107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.700133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.700454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.700512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.700715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.700751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.700933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.700960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.701135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.701179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.701376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.701406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.701621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.701646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.701822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.701848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.702065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.702091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.702338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.702384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 
00:25:36.163 [2024-07-16 01:02:10.702552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.702581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.702784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.702810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.703004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.703031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.703236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.703264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.703432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.703461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.703689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.703715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.703897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.703941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.704100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.704128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.704317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.704344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.704542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.704568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 
00:25:36.163 [2024-07-16 01:02:10.704733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.704769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.705023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.705055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.705219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.705245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.705409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.705435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.705608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.705637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.705857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.705904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.706107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.706133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.706319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.706344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.706519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.706547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.706769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.706798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 
00:25:36.163 [2024-07-16 01:02:10.706985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.707013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.707163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.707189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.707381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.707406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.707639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.707693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.707925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.707952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.708133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.708169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.708399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.708428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.708622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.708651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.708886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.163 [2024-07-16 01:02:10.708913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.163 qpair failed and we were unable to recover it. 00:25:36.163 [2024-07-16 01:02:10.709095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.709121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 
00:25:36.164 [2024-07-16 01:02:10.709313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.709359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.709663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.709716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.709937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.709964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.710119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.710145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.710320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.710350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.710581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.710610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.710796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.710824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.711024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.711056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.711235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.711269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.711589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.711635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 
00:25:36.164 [2024-07-16 01:02:10.711843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.711871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.712093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.712125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.712324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.712367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.712659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.712711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.712949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.712977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.713166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.713201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.713427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.713453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.713651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.713713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.713942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.713969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.714127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.714154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 
00:25:36.164 [2024-07-16 01:02:10.714345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.714371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.714581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.714623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.714807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.714837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.715016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.715042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.715248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.715278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.715507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.715569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.715789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.715818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.716038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.716065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.716273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.716302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.716587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.716642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 
00:25:36.164 [2024-07-16 01:02:10.716843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.716872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.717061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.717089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.717258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.717287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.717511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.717560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.717754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.717783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.717995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.718023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.718189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.718217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.718470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.718520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.718739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.718767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.164 qpair failed and we were unable to recover it. 00:25:36.164 [2024-07-16 01:02:10.718981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.164 [2024-07-16 01:02:10.719008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 
00:25:36.165 [2024-07-16 01:02:10.719211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.719239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.719507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.719562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.719762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.719789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.719988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.720015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.720216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.720245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.720485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.720510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.720664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.720702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.720887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.720914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.721092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.721120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.721324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.721349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 
00:25:36.165 [2024-07-16 01:02:10.721516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.721543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.721696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.721722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.721922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.721970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.722180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.722210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.722439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.722476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.722660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.722686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.722887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.722917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.723115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.723144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.723349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.723378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.723548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.723574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 
00:25:36.165 [2024-07-16 01:02:10.723771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.723801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.723992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.724033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.724262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.724302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.724508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.724545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.724757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.724786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.724973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.725003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.725233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.725263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.725442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.725477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.725677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.725706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.725929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.725960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 
00:25:36.165 [2024-07-16 01:02:10.726125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.726154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.726364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.726399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.726617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.726646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.726817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.726846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.727038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.727078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.727291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.727317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.727525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.727553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.727726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.727756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.727963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.165 [2024-07-16 01:02:10.728004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.165 qpair failed and we were unable to recover it. 00:25:36.165 [2024-07-16 01:02:10.728200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.728226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 
00:25:36.166 [2024-07-16 01:02:10.728405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.728432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.728637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.728667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.728842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.728871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.729110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.729137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.729307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.729346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.729598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.729628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.729831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.729862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.730066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.730092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.730311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.730353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.730577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.730615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 
00:25:36.166 [2024-07-16 01:02:10.730842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.730871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.731063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.731090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.731240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.731279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.731527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.731578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.731801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.731831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.732015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.732042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.732211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.732238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.732388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.732415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.732617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.732643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.732862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.732904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 
00:25:36.166 [2024-07-16 01:02:10.733097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.733124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.733339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.733403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.733612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.733638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.733839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.733864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.734080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.734109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.734322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.734352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.734541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.734571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.734777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.734805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.735022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.735052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.735276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.735342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 
00:25:36.166 [2024-07-16 01:02:10.735565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.735594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.735778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.735805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.735966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.735993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.166 qpair failed and we were unable to recover it. 00:25:36.166 [2024-07-16 01:02:10.736180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.166 [2024-07-16 01:02:10.736211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.736408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.736437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.736608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.736634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.736834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.736864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.737070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.737103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.737306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.737334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.737510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.737537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 
00:25:36.167 [2024-07-16 01:02:10.737762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.737792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.737990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.738019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.738195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.738226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.738427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.738453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.738611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.738637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.738796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.738821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.739043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.739073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.739254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.739279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.739437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.739464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.739633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.739662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 
00:25:36.167 [2024-07-16 01:02:10.739834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.739864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.740053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.740078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.740250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.740305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.740586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.740635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.740862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.740915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.741094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.741119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.741325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.741352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.741541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.741569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.741740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.741772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.741982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.742008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 
00:25:36.167 [2024-07-16 01:02:10.742182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.742215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.742494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.742546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.742740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.742768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.742969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.742997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.743198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.743226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.743533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.743587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.743803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.743832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.744038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.744063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.744294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.744323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.744551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.744600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 
00:25:36.167 [2024-07-16 01:02:10.744801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.744830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.745067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.745094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.745296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.745326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.167 [2024-07-16 01:02:10.745578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.167 [2024-07-16 01:02:10.745629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.167 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.745869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.745904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.746088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.746114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.746312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.746353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.746588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.746616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.746797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.746825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.747022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.747049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 
00:25:36.168 [2024-07-16 01:02:10.747231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.747264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.747553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.747582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.747742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.747771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.747980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.748007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.748182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.748208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.748463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.748512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.748683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.748712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.748940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.748966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.749175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.749204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.749402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.749432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 
00:25:36.168 [2024-07-16 01:02:10.749587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.749615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.749854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.749887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.750066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.750099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.750346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.750395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.750588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.750617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.750863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.750903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.751121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.751147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.751323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.751349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.751560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.751600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.751807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.751833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 
00:25:36.168 [2024-07-16 01:02:10.752023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.752050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.752266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.752296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.752461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.752490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.752676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.752712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.752929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.752959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.753156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.753184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.753367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.753397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.753588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.753625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.753799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.753827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.754005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.754045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 
00:25:36.168 [2024-07-16 01:02:10.754276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.754304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.754475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.754500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.754692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.754719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.754899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.168 [2024-07-16 01:02:10.754929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.168 qpair failed and we were unable to recover it. 00:25:36.168 [2024-07-16 01:02:10.755118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.755146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.755309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.755334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.755564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.755592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.755815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.755842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.756061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.756087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.756297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.756322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 
00:25:36.169 [2024-07-16 01:02:10.756526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.756554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.756751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.756780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.756970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.756999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.757227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.757252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.757426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.757453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.757679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.757727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.757957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.757983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.758135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.758160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.758347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.758374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.758633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.758682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 
00:25:36.169 [2024-07-16 01:02:10.758918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.758947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.759145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.759171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.759374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.759406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.759646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.759674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.759870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.759903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.760130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.760155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.760332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.760360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.760593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.760643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.760812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.760840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.761054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.761080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 
00:25:36.169 [2024-07-16 01:02:10.761240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.761266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.761457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.761508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.761732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.761759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.761957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.761983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.762156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.762186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.762396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.762443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.762667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.762695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.762866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.762896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.763124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.763152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.763408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.763456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 
00:25:36.169 [2024-07-16 01:02:10.763648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.763676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.763853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.763894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.764103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.764146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.764314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.169 [2024-07-16 01:02:10.764342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.169 qpair failed and we were unable to recover it. 00:25:36.169 [2024-07-16 01:02:10.764505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.764533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.764758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.764782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.765017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.765045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.765289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.765336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.765533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.765561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.765741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.765768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 
00:25:36.170 [2024-07-16 01:02:10.765932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.765961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.766145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.766170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.766366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.766393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.766587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.766612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.766812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.766841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.767046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.767075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.767299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.767326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.767497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.767522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.767746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.767774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.768009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.768038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 
00:25:36.170 [2024-07-16 01:02:10.768260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.768288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.768490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.768515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.768710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.768744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.768976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.769002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.769173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.769201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.769401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.769427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.769590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.769618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.769817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.769845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.770025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.770053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.770256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.770281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 
00:25:36.170 [2024-07-16 01:02:10.770478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.770506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.770705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.770733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.770920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.770949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.771155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.771179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.771346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.771373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.771558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.771583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.771780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.771808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.772012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.772038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.772232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.772259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.170 qpair failed and we were unable to recover it. 00:25:36.170 [2024-07-16 01:02:10.772422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.170 [2024-07-16 01:02:10.772449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 
00:25:36.171 [2024-07-16 01:02:10.772642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.772670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.772869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.772902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.773098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.773126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.773365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.773412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.773634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.773659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.773869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.773900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.774078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.774107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.774382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.774430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.774631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.774659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.774869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.774901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 
00:25:36.171 [2024-07-16 01:02:10.775059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.775084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.775299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.775327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.775496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.775525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.775724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.775749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.775949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.775975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.776186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.776237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.776432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.776460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.776640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.776665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.776816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.776841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.777028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.777057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 
00:25:36.171 [2024-07-16 01:02:10.777281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.777308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.777517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.777542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.777764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.777796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.777996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.778025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.778220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.778248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.778473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.778498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.778670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.778697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.778894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.778924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.779101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.779129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.779324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.779349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 
00:25:36.171 [2024-07-16 01:02:10.779546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.779574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.779765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.779793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.779983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.780011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.780184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.780210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.780403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.780431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.780672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.780717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.780914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.780942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.781122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.781147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.781317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.781345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 00:25:36.171 [2024-07-16 01:02:10.781587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.171 [2024-07-16 01:02:10.781634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.171 qpair failed and we were unable to recover it. 
00:25:36.172 [2024-07-16 01:02:10.781825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.781853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.782076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.782101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.782304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.782333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.782551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.782598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.782817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.782846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.783058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.783083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.783283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.783311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.783466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.783495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.783662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.783690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.783899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.783926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 
00:25:36.172 [2024-07-16 01:02:10.784097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.784125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.784357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.784406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.784599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.784627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.784796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.784823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.784989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.785017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.785190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.785217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.785415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.785442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.785614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.785640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.785836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.785864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.786038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.786066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 
00:25:36.172 [2024-07-16 01:02:10.786295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.786320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.786494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.786520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.786717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.786751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.786945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.786974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.787146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.787175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.787369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.787394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.787557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.787585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.787820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.787844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.788006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.788032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.788212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.788238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 
00:25:36.172 [2024-07-16 01:02:10.788443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.788471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.788665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.788693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.788918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.788947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.789124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.789151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.789336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.789361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.789563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.789588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.789773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.789802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.172 qpair failed and we were unable to recover it. 00:25:36.172 [2024-07-16 01:02:10.789999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.172 [2024-07-16 01:02:10.790025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.173 qpair failed and we were unable to recover it. 00:25:36.173 [2024-07-16 01:02:10.790248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.173 [2024-07-16 01:02:10.790276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.173 qpair failed and we were unable to recover it. 00:25:36.173 [2024-07-16 01:02:10.790502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.173 [2024-07-16 01:02:10.790530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.173 qpair failed and we were unable to recover it. 
00:25:36.173 [2024-07-16 01:02:10.790695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.173 [2024-07-16 01:02:10.790725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:36.173 qpair failed and we were unable to recover it.
[... the same record pair repeats continuously from 01:02:10.790 through 01:02:10.838: posix_sock_create connect() failed with errno = 111 (connection refused), followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7fae18000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." ...]
00:25:36.178 [2024-07-16 01:02:10.838264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.178 [2024-07-16 01:02:10.838293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:36.178 qpair failed and we were unable to recover it.
00:25:36.178 [2024-07-16 01:02:10.838521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.178 [2024-07-16 01:02:10.838549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.178 qpair failed and we were unable to recover it. 00:25:36.178 [2024-07-16 01:02:10.838747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.178 [2024-07-16 01:02:10.838775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.178 qpair failed and we were unable to recover it. 00:25:36.178 [2024-07-16 01:02:10.838970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.178 [2024-07-16 01:02:10.839000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.178 qpair failed and we were unable to recover it. 00:25:36.178 [2024-07-16 01:02:10.839202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.178 [2024-07-16 01:02:10.839227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.178 qpair failed and we were unable to recover it. 00:25:36.178 [2024-07-16 01:02:10.839443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.178 [2024-07-16 01:02:10.839471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.178 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.839758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.839810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.839993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.840019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.840206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.840231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.840402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.840429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.840599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.840627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 
00:25:36.179 [2024-07-16 01:02:10.840792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.840821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.841023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.841049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.841219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.841247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.841516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.841566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.841781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.841809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.842044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.842070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.842257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.842282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.842497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.842557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.842733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.842761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.842964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.842990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 
00:25:36.179 [2024-07-16 01:02:10.843194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.843222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.843461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.843486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.843690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.843715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.843908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.843934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.844136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.844164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.844383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.844432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.844660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.844688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.844889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.844915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.845098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.845123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.845350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.845409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 
00:25:36.179 [2024-07-16 01:02:10.845610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.845638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.845831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.845856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.846050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.179 [2024-07-16 01:02:10.846076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.179 qpair failed and we were unable to recover it. 00:25:36.179 [2024-07-16 01:02:10.846348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.846407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.846616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.846642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.846814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.846839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.847001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.847027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.847284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.847333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.847549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.847577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.847782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.847811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 
00:25:36.180 [2024-07-16 01:02:10.848046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.848074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.848375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.848427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.848638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.848666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.848863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.848895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.849049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.849074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.849353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.849405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.849634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.849662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.849861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.849891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.850066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.850095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.850250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.850279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 
00:25:36.180 [2024-07-16 01:02:10.850464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.850492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.850717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.850742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.850973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.851001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.851208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.851267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.851443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.851471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.851670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.851695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.851871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.851908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.852128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.852156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.852359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.852388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.852588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.852614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 
00:25:36.180 [2024-07-16 01:02:10.852834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.852861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.853062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.853090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.853296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.853322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.853499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.853524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.853724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.853753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.853944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.853973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.854175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.854203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.854374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.854400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.854622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.854650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.854851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.854885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 
00:25:36.180 [2024-07-16 01:02:10.855086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.855114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.855285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.855311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.180 qpair failed and we were unable to recover it. 00:25:36.180 [2024-07-16 01:02:10.855534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.180 [2024-07-16 01:02:10.855562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.855759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.855787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.855959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.855988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.856220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.856245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.856422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.856450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.856713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.856761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.856987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.857015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.857192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.857222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 
00:25:36.181 [2024-07-16 01:02:10.857421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.857448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.857764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.857823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.858026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.858055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.858263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.858288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.858516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.858544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.858707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.858735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.858927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.858955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.859156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.859181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.859403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.859431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.859673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.859723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 
00:25:36.181 [2024-07-16 01:02:10.859888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.859917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.860121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.860146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.860375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.860403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.860681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.860732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.860930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.860956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.861132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.861157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.861385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.861413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.861680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.861729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.861900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.861929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.862132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.862158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 
00:25:36.181 [2024-07-16 01:02:10.862385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.862412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.862640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.862690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.862914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.862939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.863117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.863142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.863289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.863314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.863489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.863514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.863746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.863774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.864009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.864035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.864240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.864268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.864593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.864648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 
00:25:36.181 [2024-07-16 01:02:10.864868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.864901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.865082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.865108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.181 [2024-07-16 01:02:10.865310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.181 [2024-07-16 01:02:10.865340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.181 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.865628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.865693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.865903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.865933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.866110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.866136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.866336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.866364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.866688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.866733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.866933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.866961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.867136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.867167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 
00:25:36.182 [2024-07-16 01:02:10.867351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.867377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.867556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.867581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.867773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.867801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.867992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.868018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.868178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.868206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.868424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.868472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.868700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.868725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.868937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.868963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.869159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.869187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.869432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.869457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 
00:25:36.182 [2024-07-16 01:02:10.869632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.869658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.869838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.869864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.870058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.870086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.870390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.870448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.870617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.870645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.870837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.870862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.871070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.871098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.871410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.871464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.871660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.871689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.871866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.871899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 
00:25:36.182 [2024-07-16 01:02:10.872100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.872128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.872378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.872425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.872626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.872653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.872824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.872849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.873043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.873068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.873276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.873327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.873533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.873561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.873742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.873768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.873925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.873951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.874103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.874128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 
00:25:36.182 [2024-07-16 01:02:10.874310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.874339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.874536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.874562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.874737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.874766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.182 [2024-07-16 01:02:10.875015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.182 [2024-07-16 01:02:10.875044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.182 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.875238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.875266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.875468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.875495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.875675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.875704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.875867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.875901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.876095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.876124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.876318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.876347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 
00:25:36.183 [2024-07-16 01:02:10.876515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.876544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.876738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.876766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.876936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.876964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.877163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.877194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.877394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.877423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.877702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.877760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.877993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.878022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.878213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.878239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.878435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.878463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.878626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.878656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 
00:25:36.183 [2024-07-16 01:02:10.878819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.878847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.879021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.879046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.879218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.879246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.879557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.879619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.879852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.879889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.880074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.880099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.880264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.880292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.880580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.880632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.880873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.880906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.881086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.881116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 
00:25:36.183 [2024-07-16 01:02:10.881305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.881335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.881604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.881652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.881860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.881908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.882092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.882117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.882297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.882325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.882553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.882600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.882804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.882832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.883040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.883068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.883244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.883272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.183 [2024-07-16 01:02:10.883465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.883510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 
00:25:36.183 [2024-07-16 01:02:10.883707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.183 [2024-07-16 01:02:10.883735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.183 qpair failed and we were unable to recover it. 00:25:36.184 [2024-07-16 01:02:10.883936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.184 [2024-07-16 01:02:10.883963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.184 qpair failed and we were unable to recover it. 00:25:36.184 [2024-07-16 01:02:10.884122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.184 [2024-07-16 01:02:10.884147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.184 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.884367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.884419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.884619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.884648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.884824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.884849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.885067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.885092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.885357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.885409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.885630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.885658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.885888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.885918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 
00:25:36.462 [2024-07-16 01:02:10.886101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.886129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.886419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.886469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.886659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.886687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.886865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.886896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.887068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.887098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.887343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.887372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.887601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.887630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.887831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.887856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.888036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.888062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.888244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.888272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 
00:25:36.462 [2024-07-16 01:02:10.888451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.888479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.888674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.888700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.888883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.888911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.889080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.889109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.889290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.889318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.889512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.889537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.889693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.889720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.889902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.889929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.890108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.890137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.890328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.890354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 
00:25:36.462 [2024-07-16 01:02:10.890582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.890611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.890816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.890843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.891054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.891081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.891233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.891258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.891439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.891465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.891632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.891658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.891905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.891934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.892137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.892162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.892336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.892364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.892572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.892616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 
00:25:36.462 [2024-07-16 01:02:10.892791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.892819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.893006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.893032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.893224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.893252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.893507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.893553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.893737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.893765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.462 [2024-07-16 01:02:10.893953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.462 [2024-07-16 01:02:10.893979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.462 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.894206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.894234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.894498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.894546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.894749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.894779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.894990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.895021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 
00:25:36.463 [2024-07-16 01:02:10.895201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.895229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.895390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.895418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.895579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.895608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.895780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.895807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.896017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.896046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.896293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.896339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.896504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.896532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.896771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.896796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.896975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.897003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.897160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.897187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 
00:25:36.463 [2024-07-16 01:02:10.897417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.897445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.897638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.897663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.897831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.897859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.898071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.898100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.898335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.898360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.898532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.898558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.898728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.898756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.898933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.898962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.899129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.899158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.899370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.899395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 
00:25:36.463 [2024-07-16 01:02:10.899588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.899616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.899783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.899812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.900019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.900045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.900196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.900220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.900447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.900475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.900634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.900663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.900900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.900926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.901079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.901104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.901274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.901303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.901471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.901498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 
00:25:36.463 [2024-07-16 01:02:10.901675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.901699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.901850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.901875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.902052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.902082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.902358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.902407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.902599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.902627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.902822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.902847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.903023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.903048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.903251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.903279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.903469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.903497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.903671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.903700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 
00:25:36.463 [2024-07-16 01:02:10.903939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.903968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.904139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.904168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.904341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.904369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.904573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.904598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.904776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.904804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.905001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.905030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.905202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.905231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.905429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.905455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.905627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.905656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.905888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.905917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 
00:25:36.463 [2024-07-16 01:02:10.906112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.906142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.906320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.906346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.906579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.906607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.906787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.906815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.907018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.907046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.907246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.907271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.907480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.907508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.907849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.907901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.908096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.908124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.908326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.908362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 
00:25:36.463 [2024-07-16 01:02:10.908566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.908593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.908783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.908811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.909015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.909045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.909242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.909267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.909478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.909506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.909693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.909726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.909954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.909983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.910182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.910207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.910426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.910454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.910654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.910702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 
00:25:36.463 [2024-07-16 01:02:10.910895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.910938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.911097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.911124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.463 [2024-07-16 01:02:10.911295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.463 [2024-07-16 01:02:10.911322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.463 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.911632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.911684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.911906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.911935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.912133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.912158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.912391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.912418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.912682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.912731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.912902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.912931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.913131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.913160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 
00:25:36.464 [2024-07-16 01:02:10.913391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.913417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.913639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.913681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.913839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.913866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.914067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.914092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.914295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.914323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.914542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.914570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.914736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.914774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.914957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.914983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.915180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.915208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.915499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.915548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 
00:25:36.464 [2024-07-16 01:02:10.915745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.915773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.915982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.916008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.916208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.916238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.916549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.916601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.916832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.916860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.917075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.917101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.917271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.917310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.917609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.917658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.917862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.917905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.918111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.918136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 
00:25:36.464 [2024-07-16 01:02:10.918340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.918367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.918642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.918670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.918887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.918915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.919097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.919123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.919341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.919370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.919611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.919655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.919897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.919924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.920103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.920129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.920355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.920383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.920587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.920612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 
00:25:36.464 [2024-07-16 01:02:10.920793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.920818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.921000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.921027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.921234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.921263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.921445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.921473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.921640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.921668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.921845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.921872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.922074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.922102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.922320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.922367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.922593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.922622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.922846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.922889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 
00:25:36.464 [2024-07-16 01:02:10.923071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.923100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.923338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.923387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.923573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.923601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.923824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.923850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.924019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.924045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.924289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.924338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.924537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.924565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.924744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.924769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.924958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.924988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.925229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.925281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 
00:25:36.464 [2024-07-16 01:02:10.925474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.925502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.925688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.925713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.925881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.925910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.926117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.926145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.926343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.926371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.926550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.926575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.926796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.926825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.927031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.927059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.927259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.927286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.927462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.927487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 
00:25:36.464 [2024-07-16 01:02:10.927648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.927677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.464 qpair failed and we were unable to recover it. 00:25:36.464 [2024-07-16 01:02:10.927919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.464 [2024-07-16 01:02:10.927948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.928141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.928171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.928389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.928415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.928626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.928654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.928886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.928914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.929102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.929131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.929340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.929365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.929541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.929569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.929761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.929789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 
00:25:36.465 [2024-07-16 01:02:10.930035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.930064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.930249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.930274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.930503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.930531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.930746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.930776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.930954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.930984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.931166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.931191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.931391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.931419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.931684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.931738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.931921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.931965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.932146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.932175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 
00:25:36.465 [2024-07-16 01:02:10.932354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.932382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.932664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.932713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.932944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.932973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.933149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.933174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.933381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.933408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.933646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.933697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.933927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.933956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.934121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.934146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.934326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.934351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.934499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.934524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 
00:25:36.465 [2024-07-16 01:02:10.934717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.934748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.934954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.934980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.935188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.935228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.935429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.935459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.935671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.935699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.935885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.935911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.936085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.936115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.936394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.936444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.936648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.936678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.936891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.936927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 
00:25:36.465 [2024-07-16 01:02:10.937135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.937163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.937394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.937421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.937652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.937715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.937895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.937923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.938162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.938191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.938362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.938391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.938624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.938654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.938888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.938915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.939078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.939104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.939331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.939361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 
00:25:36.465 [2024-07-16 01:02:10.939555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.939589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.939773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.939808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.939995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.940022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.940184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.940210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.940390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.940415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.940581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.940607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.940829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.940858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.941066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.941108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.941324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.941352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.941602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.941634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 
00:25:36.465 [2024-07-16 01:02:10.941839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.941874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.942068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.942094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.942300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.942330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.942532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.942558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.942788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.942817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.943028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.943058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.943232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.943260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.943470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.943505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.943689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.943718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.943887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.943927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 
00:25:36.465 [2024-07-16 01:02:10.944149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.944188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.944355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.944381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.944595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.944623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.944834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.944864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.945071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.945099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.945367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.945394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.945578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.945608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.945838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.945874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.946068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.465 [2024-07-16 01:02:10.946094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.465 qpair failed and we were unable to recover it. 00:25:36.465 [2024-07-16 01:02:10.946276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.946303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 
00:25:36.466 [2024-07-16 01:02:10.946505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.946534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.946737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.946767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.946974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.947004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.947205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.947232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.947434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.947463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.947638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.947668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.947849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.947893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.948068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.948095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.948274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.948300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.948504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.948533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 
00:25:36.466 [2024-07-16 01:02:10.948732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.948768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.948942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.948970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.949147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.949190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.949418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.949447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.949633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.949662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.949936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.949963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.950137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.950179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.950402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.950433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.950613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.950646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.950900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.950933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 
00:25:36.466 [2024-07-16 01:02:10.951105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.951131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.951379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.951416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.951655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.951684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.951883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.951910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.952079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.952106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.952320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.952350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.952526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.952555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.952769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.952799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.953014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.953041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.953266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.953299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 
00:25:36.466 [2024-07-16 01:02:10.953510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.953539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.953740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.953767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.953982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.954014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.954186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.954212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.954358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.954385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.954555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.954589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.954802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.954842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.955091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.955126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.955360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.955408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.955607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.955632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 
00:25:36.466 [2024-07-16 01:02:10.955819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.955854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.956071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.956097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.956280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.956310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.956521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.956557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.956734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.956764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.956994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.957025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.957221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.957265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.957456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.957483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.957663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.957688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.957882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.957909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 
00:25:36.466 [2024-07-16 01:02:10.958062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.958088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.958241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.958268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.958481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.958515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.958691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.958717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.958883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.958910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.959122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.959149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.959323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.959354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.959536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.959571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.959775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.959805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.959990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.960019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 
00:25:36.466 [2024-07-16 01:02:10.960256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.960282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.960529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.960559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.960731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.960760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.960967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.960995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.961191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.961218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.961434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.961461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.961619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.961650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.961855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.961893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.962075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.962102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 00:25:36.466 [2024-07-16 01:02:10.962267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.962294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.466 qpair failed and we were unable to recover it. 
00:25:36.466 [2024-07-16 01:02:10.962476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.466 [2024-07-16 01:02:10.962503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.962761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.962787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.962978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.963005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.963164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.963192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.963339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.963369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.963581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.963606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.963792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.963818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.964003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.964030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.964242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.964270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.964451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.964477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 
00:25:36.467 [2024-07-16 01:02:10.964653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.964680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.964855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.964894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.965056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.965083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.965280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.965307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.965496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.965525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.965754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.965784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.965950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.965996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.966201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.966227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.966429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.966454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.966655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.966684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 
00:25:36.467 [2024-07-16 01:02:10.966853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.966898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.967067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.967094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.967316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.967343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.967502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.967537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.967716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.967743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.967919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.967947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.968126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.968156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.968328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.968359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.968533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.968559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.968743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.968770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 
00:25:36.467 [2024-07-16 01:02:10.968963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.969000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.969163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.969208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.969430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.969457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.969628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.969654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.969840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.969888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.970040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.970066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.970269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.970299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.970491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.970526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.970734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.970761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.970960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.970987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 
00:25:36.467 [2024-07-16 01:02:10.971234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.971264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.971461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.971488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.971666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.971703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.971923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.971949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.972112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.972139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.972345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.972380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.972621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.972649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.972840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.972870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.973092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.973128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.973297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.973322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 
00:25:36.467 [2024-07-16 01:02:10.973511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.973539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.973726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.973752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.973970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.974000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.974222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.974248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.974450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.974477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.974650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.974686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.974933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.974967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.975168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.975201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.975435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.975462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.975656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.975682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 
00:25:36.467 [2024-07-16 01:02:10.975863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.975906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.976105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.976132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.976312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.976342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.976550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.976577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.976767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.976793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.977007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.977044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.977272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.977298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.977455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.977482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.977639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.977664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.977834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.977864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 
00:25:36.467 [2024-07-16 01:02:10.978069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.978099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.978302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.978341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.978580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.978616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.978820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.978845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.979070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.979104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.979351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.979380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.979578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.979606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.979796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.979826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.467 qpair failed and we were unable to recover it. 00:25:36.467 [2024-07-16 01:02:10.980058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.467 [2024-07-16 01:02:10.980085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.980237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.980262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 
00:25:36.468 [2024-07-16 01:02:10.980455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.980480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.980687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.980718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.980925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.980954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.981152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.981193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.981399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.981428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.981608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.981633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.981811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.981846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.982032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.982058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.982230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.982259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.982442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.982470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 
00:25:36.468 [2024-07-16 01:02:10.982651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.982687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.982888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.982916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.983097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.983143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.983327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.983352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.983553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.983579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.983789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.983817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.984009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.984047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.984271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.984297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.984508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.984536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.984734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.984764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 
00:25:36.468 [2024-07-16 01:02:10.984956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.984982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.985167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.985193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.985399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.985424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.985638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.985669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.985893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.985927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.986168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.986194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.986377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.986402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.986581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.986607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.986785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.986821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 00:25:36.468 [2024-07-16 01:02:10.987029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.468 [2024-07-16 01:02:10.987056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.468 qpair failed and we were unable to recover it. 
00:25:36.468 [2024-07-16 01:02:10.987212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:36.468 [2024-07-16 01:02:10.987239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 
00:25:36.468 qpair failed and we were unable to recover it. 
00:25:36.468 [... the same pair of errors (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420) repeats continuously from 2024-07-16 01:02:10.987 through 01:02:11.033, each attempt ending with "qpair failed and we were unable to recover it." ...] 
00:25:36.471 [2024-07-16 01:02:11.033093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:36.471 [2024-07-16 01:02:11.033121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 
00:25:36.471 qpair failed and we were unable to recover it. 
00:25:36.471 [2024-07-16 01:02:11.033325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.471 [2024-07-16 01:02:11.033353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.471 qpair failed and we were unable to recover it. 00:25:36.471 [2024-07-16 01:02:11.033523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.033550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.033747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.033775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.034003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.034032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.034223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.034251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.034450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.034475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.034680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.034708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.034930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.034959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.035156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.035185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.035412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.035437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 
00:25:36.472 [2024-07-16 01:02:11.035613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.035641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.035804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.035834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.036022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.036050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.036246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.036271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.036473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.036501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.036694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.036722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.036908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.036941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.037166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.037192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.037387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.037415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.037608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.037636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 
00:25:36.472 [2024-07-16 01:02:11.037830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.037858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.038063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.038088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.038240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.038265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.038439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.038463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.038644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.038673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.038849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.038874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.039111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.039139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.039334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.039362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.039552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.039581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.039804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.039829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 
00:25:36.472 [2024-07-16 01:02:11.040038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.040067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.040258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.040286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.040456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.040484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.040688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.040714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.040889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.040915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.041140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.041169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.041389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.041417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.041614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.041639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.041808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.041838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.042021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.042049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 
00:25:36.472 [2024-07-16 01:02:11.042238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.042266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.042492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.042518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.042744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.042772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.042981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.043010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.043177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.043205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.043378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.043403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.043636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.043663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.043866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.043900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.044101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.044129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.044302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.044328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 
00:25:36.472 [2024-07-16 01:02:11.044529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.044555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.044742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.044770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.044972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.045000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.045207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.045232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.045471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.045496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.045674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.045700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.045883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.045916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.046115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.046140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.046361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.046389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.046568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.046596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 
00:25:36.472 [2024-07-16 01:02:11.046785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.046813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.047016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.047043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.047267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.047295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.047495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.047523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.047696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.047724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.047916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.047942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.048096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.048122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.048325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.048366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.048540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.048568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.048742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.048768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 
00:25:36.472 [2024-07-16 01:02:11.049002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.049031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.049228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.049258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.049447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.472 [2024-07-16 01:02:11.049476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.472 qpair failed and we were unable to recover it. 00:25:36.472 [2024-07-16 01:02:11.049649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.049676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.049862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.049896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.050120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.050148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.050327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.050355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.050559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.050584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.050805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.050832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.051044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.051070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 
00:25:36.473 [2024-07-16 01:02:11.051223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.051265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.051487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.051512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.051691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.051717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.051849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9813f0 is same with the state(5) to be set 00:25:36.473 [2024-07-16 01:02:11.052112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.052162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.052388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.052415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.052625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.052654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.052859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.052895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.053073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.053098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.053279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.053307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 
00:25:36.473 [2024-07-16 01:02:11.053504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.053532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.053724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.053752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.053925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.053952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.054105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.054131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.054274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.054300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.054499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.054526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.054749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.054777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.054992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.055019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.055210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.055238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.055464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.055494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 
00:25:36.473 [2024-07-16 01:02:11.055791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.055820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.056000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.056026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.056206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.056233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.056471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.056500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.056731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.056759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.056934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.056970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.057178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.057203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.057437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.057465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.057644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.057685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.057883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.057909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 
00:25:36.473 [2024-07-16 01:02:11.058088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.058119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.058306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.058332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.058539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.058569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.058770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.058800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.059024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.059063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.059261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.059316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.059530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.059574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.059811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.059855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.060035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.060061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.060312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.060355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 
00:25:36.473 [2024-07-16 01:02:11.060571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.060613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.060770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.060796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.060980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.061005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.061176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.061218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.061418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.061461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.061704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.061747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.061927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.061952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.062183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.062225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.062416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.062459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 00:25:36.473 [2024-07-16 01:02:11.062630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.473 [2024-07-16 01:02:11.062675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.473 qpair failed and we were unable to recover it. 
00:25:36.473 [2024-07-16 01:02:11.062852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.473 [2024-07-16 01:02:11.062883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420
00:25:36.473 qpair failed and we were unable to recover it.
00:25:36.473 [... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously between [2024-07-16 01:02:11.063094] and [2024-07-16 01:02:11.112619] ...]
00:25:36.476 [2024-07-16 01:02:11.112815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.476 [2024-07-16 01:02:11.112840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420
00:25:36.476 qpair failed and we were unable to recover it.
00:25:36.476 [2024-07-16 01:02:11.113025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.113060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.113258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.113301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.113505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.113548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.113728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.113753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.113947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.113990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.114222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.114265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.114469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.114512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.114743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.114786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.114991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.115035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.115239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.115283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 
00:25:36.476 [2024-07-16 01:02:11.115498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.115529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.115712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.115737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.115967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.116012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.116213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.116255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.116491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.116533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.116734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.116760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.116929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.116958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.117143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.117185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.117418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.117461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.117695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.117737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 
00:25:36.476 [2024-07-16 01:02:11.117994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.118024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.118237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.118281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.118488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.118516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.118708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.118734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.118932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.118961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.119178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.119226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.119394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.119437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.119641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.119684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.119901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.119927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.120132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.120161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 
00:25:36.476 [2024-07-16 01:02:11.120374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.120416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.120626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.120669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.476 qpair failed and we were unable to recover it. 00:25:36.476 [2024-07-16 01:02:11.120846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.476 [2024-07-16 01:02:11.120871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.121075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.121117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.121311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.121339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.121562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.121605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.121759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.121784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.121940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.121966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.122159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.122202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.122389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.122431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 
00:25:36.477 [2024-07-16 01:02:11.122635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.122663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.122843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.122868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.123077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.123102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.123340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.123382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.123618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.123662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.123870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.123910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.124118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.124144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.124314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.124357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.124559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.124602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.124752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.124779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 
00:25:36.477 [2024-07-16 01:02:11.124956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.125004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.125185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.125228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.125432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.125474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.125678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.125721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.125900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.125926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.126154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.126183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.126396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.126439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.126665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.126707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.126854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.126886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.127116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.127158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 
00:25:36.477 [2024-07-16 01:02:11.127360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.127403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.127601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.127644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.127820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.127847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.128081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.128124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.128346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.128389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.128569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.128611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.128788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.128814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.129020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.129064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.129263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.129292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.129521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.129563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 
00:25:36.477 [2024-07-16 01:02:11.129747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.129772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.129977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.130020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.130229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.130273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.130479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.130521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.130699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.130725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.130930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.130956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.131125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.131168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.131396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.131439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.131647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.131690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.131898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.131925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 
00:25:36.477 [2024-07-16 01:02:11.132129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.132156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.132353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.132395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.132591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.132619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.132832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.132858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.133066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.133091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.133259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.133303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.133529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.133572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.133780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.133805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.134018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.134044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.134257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.134300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 
00:25:36.477 [2024-07-16 01:02:11.134501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.134542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.134753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.134778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.134979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.135023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.135231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.135275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.135493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.135534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.135737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.135763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.135921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.135949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.136167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.136210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.136437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.136480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.136713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.136756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 
00:25:36.477 [2024-07-16 01:02:11.136962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.137005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.137210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.137252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.137460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.137503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.137701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.137726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.137935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.137964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.138180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.138210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.138455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.138498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.138675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.138700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.138903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.138929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.139130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.139158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 
00:25:36.477 [2024-07-16 01:02:11.139353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.477 [2024-07-16 01:02:11.139396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.477 qpair failed and we were unable to recover it. 00:25:36.477 [2024-07-16 01:02:11.139600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.139643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.139845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.139871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.140062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.140088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.140287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.140330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.140559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.140601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.140750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.140775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.140977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.141027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.141206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.141249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.141460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.141502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 
00:25:36.478 [2024-07-16 01:02:11.141695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.141723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.141951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.141994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.142214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.142256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.142459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.142502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.142708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.142733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.142961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.143004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.143203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.143232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.143477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.143518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.143724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.143750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.143945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.143994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 
00:25:36.478 [2024-07-16 01:02:11.144200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.144242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.144452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.144494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.144699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.144725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.144900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.144926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.145102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.145144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.145307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.145350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.145551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.145594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.145746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.145772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.145950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.145993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.146221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.146263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 
00:25:36.478 [2024-07-16 01:02:11.146435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.146479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.146681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.146706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.146885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.146912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.147096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.147121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.147333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.147377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.147604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.147648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.147827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.147853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.148047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.148092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.148289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.148319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.148569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.148612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 
00:25:36.478 [2024-07-16 01:02:11.148784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.148809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.148981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.149025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.149256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.149298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.149479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.149522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.149669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.149695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.149900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.149926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.150125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.150168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.150399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.150448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.150649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.150693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.150868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.150899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 
00:25:36.478 [2024-07-16 01:02:11.151070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.151095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.151297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.151340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.151544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.151587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.151762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.151788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.151989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.152015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.152217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.152261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.152458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.152487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.152657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.152683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.152856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.152887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.153094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.153139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 
00:25:36.478 [2024-07-16 01:02:11.153344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.153388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.153593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.153636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.153808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.153833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.154042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.154068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.154275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.154318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.154561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.154602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.154772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.154798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.154980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.155006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.155176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.155218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.155412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.155440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 
00:25:36.478 [2024-07-16 01:02:11.155643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.155686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.155897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.155922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.156108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.156151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.156361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.156403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.478 qpair failed and we were unable to recover it. 00:25:36.478 [2024-07-16 01:02:11.156610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.478 [2024-07-16 01:02:11.156653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.156836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.156861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.157044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.157069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.157281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.157308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.157549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.157592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.157773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.157799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 
00:25:36.479 [2024-07-16 01:02:11.157966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.157992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.158165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.158208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.158406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.158448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.158688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.158731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.158999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.159043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.159246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.159289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.159483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.159525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.159705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.159735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.159953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.159996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.160243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.160284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 
00:25:36.479 [2024-07-16 01:02:11.160517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.160560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.160717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.160743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.160975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.161028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.161229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.161272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.161475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.161519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.161695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.161721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.161901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.161927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.162133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.162175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.162373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.162416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.162657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.162700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 
00:25:36.479 [2024-07-16 01:02:11.162882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.162908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.163118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.163144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.163320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.163363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.163592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.163634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.163841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.163866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.164048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.164073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.164251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.164293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.164459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.164499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.164730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.164773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.164979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.165006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 
00:25:36.479 [2024-07-16 01:02:11.165208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.165250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.165456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.165498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.165719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.165761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.165990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.166033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.166265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.166308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.166506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.166550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.166756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.166782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.166981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.167027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.167229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.167271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.167502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.167545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 
00:25:36.479 [2024-07-16 01:02:11.167715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.167740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.167946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.167988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.168215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.168259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.168463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.168506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.168684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.168709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.168889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.168915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.169117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.169160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.169359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.169405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.169571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.169614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.169757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.169783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 
00:25:36.479 [2024-07-16 01:02:11.170012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.170056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.170260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.170304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.170489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.170531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.170711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.170736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.170951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.170994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.171175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.171217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.171421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.171464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.171661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.171704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.171899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.171925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.172127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.172152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 
00:25:36.479 [2024-07-16 01:02:11.172359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.172402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.172631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.172674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.172874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.172910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.173112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.173137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.173325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.173368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.173578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.173621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.173833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.173858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.174072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.174098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.174306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.174348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.174573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.174615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 
00:25:36.479 [2024-07-16 01:02:11.174785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.479 [2024-07-16 01:02:11.174810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.479 qpair failed and we were unable to recover it. 00:25:36.479 [2024-07-16 01:02:11.175017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.175044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.175270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.175312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.175512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.175540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.175716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.175742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.175914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.175940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.176147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.176190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.176419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.176462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.176700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.176742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.176935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.176979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 
00:25:36.480 [2024-07-16 01:02:11.177185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.177213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.177430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.177473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.177703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.177746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.177933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.177962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.178209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.178252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.178455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.178483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.178675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.178701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.178844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.178873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.179064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.179107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.179335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.179377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 
00:25:36.480 [2024-07-16 01:02:11.179586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.179629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.179807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.179832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.180038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.180082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.180256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.180298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.180477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.180521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.180695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.180720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.180925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.180951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.181181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.181223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.181462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.181505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.181745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.181787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 
00:25:36.480 [2024-07-16 01:02:11.181964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.181990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.182190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.182235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.182474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.182516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.182750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.182792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.182998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.183042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.183240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.183282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.183483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.183510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.183732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.183769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.184028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.184081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 00:25:36.480 [2024-07-16 01:02:11.184326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.184370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it. 
00:25:36.480 [2024-07-16 01:02:11.184617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.480 [2024-07-16 01:02:11.184661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.480 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7fae10000b90 (addr=10.0.0.2, port=4420) repeat from 01:02:11.184 through 01:02:11.230 ...]
00:25:36.762 [2024-07-16 01:02:11.230662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.762 [2024-07-16 01:02:11.230705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.762 qpair failed and we were unable to recover it. 00:25:36.762 [2024-07-16 01:02:11.230874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.230905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.231085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.231111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.231284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.231326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.231534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.231579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.231786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.231812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.231966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.231992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.232204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.232231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.232453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.232497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.232704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.232734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 
00:25:36.763 [2024-07-16 01:02:11.232943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.232970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.233167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.233195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.233392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.233421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.233606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.233634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.233842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.233869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.234087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.234113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.234291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.234334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.234587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.234636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.234801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.234827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.235008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.235035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 
00:25:36.763 [2024-07-16 01:02:11.235215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.235258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.235463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.235491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.235713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.235758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.235928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.235955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.236130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.236159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.236407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.236450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.236636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.236680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.236855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.236885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.237061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.237086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.237275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.237320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 
00:25:36.763 [2024-07-16 01:02:11.237496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.237544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.237727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.237753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.237967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.237995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.238146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.238188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.763 [2024-07-16 01:02:11.238359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.763 [2024-07-16 01:02:11.238388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.763 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.238647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.238674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.238889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.238917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.239110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.239134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.239337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.239364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.239526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.239553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 
00:25:36.764 [2024-07-16 01:02:11.239769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.239796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.239983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.240022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.240211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.240240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.240448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.240477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.240775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.240831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.241006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.241033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.241234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.241278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.241485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.241528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.241702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.241748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.241952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.241978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 
00:25:36.764 [2024-07-16 01:02:11.242148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.242190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.242364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.242407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.242569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.242612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.242790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.242817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.243015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.243058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.243271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.243315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.243527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.243570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.243747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.243774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.243983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.244026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.244233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.244275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 
00:25:36.764 [2024-07-16 01:02:11.244507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.244551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.244732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.244757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.244943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.244987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.245200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.245243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.245440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.245482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.245630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.245656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.245836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.245861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.246074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.246117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.246347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.246390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.246592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.246636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 
00:25:36.764 [2024-07-16 01:02:11.246807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.246833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.247052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.247098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.247311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.247354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.247553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.247596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.247766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.247792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.764 qpair failed and we were unable to recover it. 00:25:36.764 [2024-07-16 01:02:11.247954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.764 [2024-07-16 01:02:11.247980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.248151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.248195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.248389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.248432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.248611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.248658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.248808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.248833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 
00:25:36.765 [2024-07-16 01:02:11.249041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.249085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.249289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.249333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.249516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.249559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.249738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.249764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.249972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.250021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.250226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.250271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.250504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.250548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.250699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.250725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.250930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.250957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.251134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.251176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 
00:25:36.765 [2024-07-16 01:02:11.251377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.251404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.251604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.251649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.251850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.251880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.252058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.252083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.252294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.252322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.252540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.252584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.252766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.252792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.252972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.252998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.253208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.253236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.253448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.253490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 
00:25:36.765 [2024-07-16 01:02:11.253692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.253720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.253950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.253994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.254195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.254239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.254426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.254471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.254675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.254717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.254872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.254903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.255059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.255084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.255290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.255334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.255542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.255585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.255735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.255760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 
00:25:36.765 [2024-07-16 01:02:11.255909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.255935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.256102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.256146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.256349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.256392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.256579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.256622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.256803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.256828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.257036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.257079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.765 qpair failed and we were unable to recover it. 00:25:36.765 [2024-07-16 01:02:11.257258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.765 [2024-07-16 01:02:11.257305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.257479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.257525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.257704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.257730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.257928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.257955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 
00:25:36.766 [2024-07-16 01:02:11.258163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.258206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.258445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.258487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.258691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.258734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.258959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.259004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.259177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.259224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.259436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.259480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.259635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.259660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.259837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.259861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.260070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.260113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.260282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.260323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 
00:25:36.766 [2024-07-16 01:02:11.260505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.260547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.260723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.260748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.260896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.260923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.261156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.261200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.261400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.261428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.261647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.261672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.261845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.261871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.262078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.262121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.262330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.262372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.262546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.262591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 
00:25:36.766 [2024-07-16 01:02:11.262774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.262799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.263006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.263050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.263219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.263261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.263445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.263471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.263648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.263673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.263846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.263871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.264086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.264129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.264356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.264385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.264579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.264607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 00:25:36.766 [2024-07-16 01:02:11.264860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.766 [2024-07-16 01:02:11.264928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.766 qpair failed and we were unable to recover it. 
00:25:36.766 [2024-07-16 01:02:11.265131 .. 01:02:11.310217] posix.c:1023:posix_sock_create / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: the same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously over this interval, wall-clock prefix 00:25:36.766 through 00:25:36.771, differing only in the per-attempt timestamps.
00:25:36.771 [2024-07-16 01:02:11.310414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.771 [2024-07-16 01:02:11.310439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.771 qpair failed and we were unable to recover it. 00:25:36.771 [2024-07-16 01:02:11.310635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.771 [2024-07-16 01:02:11.310662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.771 qpair failed and we were unable to recover it. 00:25:36.771 [2024-07-16 01:02:11.310853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.771 [2024-07-16 01:02:11.310887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.771 qpair failed and we were unable to recover it. 00:25:36.771 [2024-07-16 01:02:11.311113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.771 [2024-07-16 01:02:11.311138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.771 qpair failed and we were unable to recover it. 00:25:36.771 [2024-07-16 01:02:11.311334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.771 [2024-07-16 01:02:11.311361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.771 qpair failed and we were unable to recover it. 00:25:36.771 [2024-07-16 01:02:11.311537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.771 [2024-07-16 01:02:11.311568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.771 qpair failed and we were unable to recover it. 00:25:36.771 [2024-07-16 01:02:11.311793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.771 [2024-07-16 01:02:11.311818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.771 qpair failed and we were unable to recover it. 00:25:36.771 [2024-07-16 01:02:11.312016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.771 [2024-07-16 01:02:11.312045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.771 qpair failed and we were unable to recover it. 00:25:36.771 [2024-07-16 01:02:11.312244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.771 [2024-07-16 01:02:11.312271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.771 qpair failed and we were unable to recover it. 00:25:36.771 [2024-07-16 01:02:11.312473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.312498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 
00:25:36.772 [2024-07-16 01:02:11.312650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.312674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.312882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.312908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.313117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.313142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.313322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.313350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.313547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.313574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.313774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.313816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.314015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.314041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.314247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.314274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.314496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.314521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.314697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.314726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 
00:25:36.772 [2024-07-16 01:02:11.314918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.314947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.315121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.315145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.315308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.315338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.315559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.315592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.315845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.315874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.316147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.316179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.316429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.316461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.316662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.316691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.316893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.316936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.317181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.317213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 
00:25:36.772 [2024-07-16 01:02:11.317456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.317485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.317711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.317743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.317997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.318027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.318260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.318289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.318480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.318513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.318736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.318768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.318993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.319022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.319244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.319290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.319479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.319519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.319746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.319775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 
00:25:36.772 [2024-07-16 01:02:11.319977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.320021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.320275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.320307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.320533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.320562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.320803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.320835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.321093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.772 [2024-07-16 01:02:11.321122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.772 qpair failed and we were unable to recover it. 00:25:36.772 [2024-07-16 01:02:11.321322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.321350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.321568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.321607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.321858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.321899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.322133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.322161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.322400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.322432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 
00:25:36.773 [2024-07-16 01:02:11.322673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.322705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.322929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.322958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.323144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.323176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.323369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.323401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.323620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.323649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.323862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.323903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.324149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.324181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.324440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.324469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.324699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.324730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.324942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.324971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 
00:25:36.773 [2024-07-16 01:02:11.325205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.325234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.325437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.325469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.325675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.325708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.325934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.325963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.326184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.326213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.326440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.326472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.326718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.326747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.327010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.327043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.327233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.327265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.327484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.327513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 
00:25:36.773 [2024-07-16 01:02:11.327771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.327803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.328006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.328039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.328259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.328288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.328488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.328535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.328785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.328816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.329075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.329104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.329324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.329356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.329575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.329607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.329840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.329869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.330119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.330151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 
00:25:36.773 [2024-07-16 01:02:11.330395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.330427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.330641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.330670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.330932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.330965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.331213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.331245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.331441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.331469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.331650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.331678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.773 [2024-07-16 01:02:11.331888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.773 [2024-07-16 01:02:11.331934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.773 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.332192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.332220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.332423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.332455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.332700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.332733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 
00:25:36.774 [2024-07-16 01:02:11.332955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.332984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.333189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.333221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.333465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.333497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.333748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.333776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.334030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.334062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.334283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.334316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.334560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.334590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.334820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.334852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.335117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.335146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.335366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.335394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 
00:25:36.774 [2024-07-16 01:02:11.335648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.335680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.335908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.335942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.336165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.336194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.336426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.336458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.336646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.336677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.336963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.336992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.337206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.337238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.337428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.337460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.337711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.337740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.338004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.338036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 
00:25:36.774 [2024-07-16 01:02:11.338264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.338295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.338548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.338576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.338829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.338858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.339088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.339119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.339336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.339369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.339638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.339670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.339905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.339938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.340187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.340216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.340405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.340437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.340686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.340718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 
00:25:36.774 [2024-07-16 01:02:11.340936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.340965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.341160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.341192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.341418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.341450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.341673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.341701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.341930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.341963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.342204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.342233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.342461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.342489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.342679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.774 [2024-07-16 01:02:11.342711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.774 qpair failed and we were unable to recover it. 00:25:36.774 [2024-07-16 01:02:11.342933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.342965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.343187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.343216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 
00:25:36.775 [2024-07-16 01:02:11.343441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.343487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.343704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.343736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.343982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.344011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.344265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.344294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.344533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.344565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.344772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.344800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.345024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.345071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.345295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.345327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.345575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.345603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.345809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.345846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 
00:25:36.775 [2024-07-16 01:02:11.346072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.346104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.346348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.346382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.346638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.346670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.346897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.346929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.347177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.347206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.347462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.347493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.347736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.347768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.347994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.348023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.348201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.348229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.348455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.348487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 
00:25:36.775 [2024-07-16 01:02:11.348704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.348732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.348944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.348974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.349192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.349225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.349453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.349482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.349738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.349770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.350018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.350050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.350317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.350346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.350579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.350611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.350836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.350868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.351101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.351129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 
00:25:36.775 [2024-07-16 01:02:11.351382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.351414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.351634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.351666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.351891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.351921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.352164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.352196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.352416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.352447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.352669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.352698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.352948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.352981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.353233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.353261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.353480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.353509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.775 qpair failed and we were unable to recover it. 00:25:36.775 [2024-07-16 01:02:11.353765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-07-16 01:02:11.353797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 
00:25:36.776 [2024-07-16 01:02:11.354041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.354071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.354297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.354326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.354545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.354577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.354791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.354822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.355066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.355095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.355321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.355353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.355554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.355585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.355808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.355837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.356048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.356080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.356284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.356315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 
00:25:36.776 [2024-07-16 01:02:11.356539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.356567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.356796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.356828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.357091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.357125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.357315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.357344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.357594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.357626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.357845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.357884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.358134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.358163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.358360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.358400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.358653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.358681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.358946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.358976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 
00:25:36.776 [2024-07-16 01:02:11.359147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.359196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.359441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.359473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.359695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.359723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.359968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.360001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.360220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.360252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.360504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.360533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.360749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.360781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.361027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.361060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.361286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.361316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.361539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.361571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 
00:25:36.776 [2024-07-16 01:02:11.361788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.361820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.362048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.362077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.362272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.362304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.362528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.362557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.362757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-07-16 01:02:11.362787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.776 qpair failed and we were unable to recover it. 00:25:36.776 [2024-07-16 01:02:11.362994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.363039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.363254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.363286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.363522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.363550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.363770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.363802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.364020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.364058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 
00:25:36.777 [2024-07-16 01:02:11.364257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.364285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.364511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.364544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.364762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.364793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.365024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.365053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.365249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.365281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.365521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.365552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.365752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.365781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.365996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.366028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.366216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.366247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.366494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.366522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 
00:25:36.777 [2024-07-16 01:02:11.366762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.366794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.367016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.367049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.367277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.367306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.367540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.367572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.367789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.367820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.368021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.368056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.368305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.368337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.368537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.368568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.368786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.368815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.369003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.369032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 
00:25:36.777 [2024-07-16 01:02:11.369288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.369319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.369546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.369575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.369811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.369839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.370074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.370103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.370325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.370354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.370572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.370604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.370858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.370901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.371159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.371188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.371437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.371470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.371746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.371797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 
00:25:36.777 [2024-07-16 01:02:11.372042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.372071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.372237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.372266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.372508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.372540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.372759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.372789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.372984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.373013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.373212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.373240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.777 qpair failed and we were unable to recover it. 00:25:36.777 [2024-07-16 01:02:11.373504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-07-16 01:02:11.373533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.373785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.373816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.374061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.374090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.374297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.374326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 
00:25:36.778 [2024-07-16 01:02:11.374596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.374635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.374894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.374927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.375175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.375204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.375434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.375465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.375712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.375744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.376006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.376035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.376242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.376274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.376497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.376530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.376802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.376830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.377134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.377163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 
00:25:36.778 [2024-07-16 01:02:11.377368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.377396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.377586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.377613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.377847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.377885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.378196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.378229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.378495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.378523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.378738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.378784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.378982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.379014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.379223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.379252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.379476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.379508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.379750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.379782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 
00:25:36.778 [2024-07-16 01:02:11.380027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.380057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.380263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.380295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.380513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.380545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.380892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.380921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.381203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.381231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.381461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.381494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.381765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.381794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.382031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.382063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.382255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.382287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.382541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.382569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 
00:25:36.778 [2024-07-16 01:02:11.382810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.382842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.383092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.383121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.383288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.383316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.383547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.383579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.383800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.383832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.384084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.384114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.384335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.384367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.384566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-07-16 01:02:11.384597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.778 qpair failed and we were unable to recover it. 00:25:36.778 [2024-07-16 01:02:11.384867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.384908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 00:25:36.779 [2024-07-16 01:02:11.385151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.385201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 
00:25:36.779 [2024-07-16 01:02:11.385453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.385485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 00:25:36.779 [2024-07-16 01:02:11.385739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.385768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 00:25:36.779 [2024-07-16 01:02:11.386002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.386034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 00:25:36.779 [2024-07-16 01:02:11.386280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.386313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 00:25:36.779 [2024-07-16 01:02:11.386554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.386583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 00:25:36.779 [2024-07-16 01:02:11.386820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.386851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 00:25:36.779 [2024-07-16 01:02:11.387112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.387141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 00:25:36.779 [2024-07-16 01:02:11.387364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.387393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 00:25:36.779 [2024-07-16 01:02:11.387619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.387650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 00:25:36.779 [2024-07-16 01:02:11.387866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.387914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 
00:25:36.779 [2024-07-16 01:02:11.388108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.388137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 00:25:36.779 [2024-07-16 01:02:11.388316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.388345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 00:25:36.779 [2024-07-16 01:02:11.388536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.388565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 00:25:36.779 [2024-07-16 01:02:11.388781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.388809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 00:25:36.779 [2024-07-16 01:02:11.389007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.389036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 00:25:36.779 [2024-07-16 01:02:11.389291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.389323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 00:25:36.779 [2024-07-16 01:02:11.389545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.389574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 00:25:36.779 [2024-07-16 01:02:11.389825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.389856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 00:25:36.779 [2024-07-16 01:02:11.390088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.390121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 00:25:36.779 [2024-07-16 01:02:11.390369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.779 [2024-07-16 01:02:11.390398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.779 qpair failed and we were unable to recover it. 
00:25:36.784 [2024-07-16 01:02:11.441641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.784 [2024-07-16 01:02:11.441670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.784 qpair failed and we were unable to recover it. 00:25:36.784 [2024-07-16 01:02:11.441865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.784 [2024-07-16 01:02:11.441906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.784 qpair failed and we were unable to recover it. 00:25:36.784 [2024-07-16 01:02:11.442092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.784 [2024-07-16 01:02:11.442124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.784 qpair failed and we were unable to recover it. 00:25:36.784 [2024-07-16 01:02:11.442367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.784 [2024-07-16 01:02:11.442395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.784 qpair failed and we were unable to recover it. 00:25:36.784 [2024-07-16 01:02:11.442623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.784 [2024-07-16 01:02:11.442655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.784 qpair failed and we were unable to recover it. 00:25:36.784 [2024-07-16 01:02:11.442839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.784 [2024-07-16 01:02:11.442871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.784 qpair failed and we were unable to recover it. 00:25:36.784 [2024-07-16 01:02:11.443102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.784 [2024-07-16 01:02:11.443130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.784 qpair failed and we were unable to recover it. 00:25:36.784 [2024-07-16 01:02:11.443326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.784 [2024-07-16 01:02:11.443357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.784 qpair failed and we were unable to recover it. 00:25:36.784 [2024-07-16 01:02:11.443557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.784 [2024-07-16 01:02:11.443589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.784 qpair failed and we were unable to recover it. 00:25:36.784 [2024-07-16 01:02:11.443774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.784 [2024-07-16 01:02:11.443803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.784 qpair failed and we were unable to recover it. 
00:25:36.784 [2024-07-16 01:02:11.444046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.784 [2024-07-16 01:02:11.444079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.784 qpair failed and we were unable to recover it. 00:25:36.784 [2024-07-16 01:02:11.444323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.784 [2024-07-16 01:02:11.444354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.784 qpair failed and we were unable to recover it. 00:25:36.784 [2024-07-16 01:02:11.444569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.784 [2024-07-16 01:02:11.444603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.784 qpair failed and we were unable to recover it. 00:25:36.784 [2024-07-16 01:02:11.444856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.784 [2024-07-16 01:02:11.444898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.784 qpair failed and we were unable to recover it. 00:25:36.784 [2024-07-16 01:02:11.445102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.784 [2024-07-16 01:02:11.445134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.784 qpair failed and we were unable to recover it. 00:25:36.784 [2024-07-16 01:02:11.445385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.784 [2024-07-16 01:02:11.445413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.784 qpair failed and we were unable to recover it. 00:25:36.784 [2024-07-16 01:02:11.445658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.784 [2024-07-16 01:02:11.445690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.784 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.445910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.445942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.446165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.446194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.446393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.446425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 
00:25:36.785 [2024-07-16 01:02:11.446645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.446677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.446892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.446922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.447146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.447177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.447366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.447398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.447619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.447649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.447871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.447908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.448127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.448156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.448418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.448446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.448697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.448728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.448951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.448983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 
00:25:36.785 [2024-07-16 01:02:11.449237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.449266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.449473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.449505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.449694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.449726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.449915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.449944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.450111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.450141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.450334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.450367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.450573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.450602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.450815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.450847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.451074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.451106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.451320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.451348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 
00:25:36.785 [2024-07-16 01:02:11.451552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.451584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.451830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.451862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.452127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.452155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.452393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.452422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.452640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.452672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.452944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.452973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.453177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.453209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.453435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.453466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.453665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.453699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.453924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.453956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 
00:25:36.785 [2024-07-16 01:02:11.454178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.454209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.454419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.454448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.785 qpair failed and we were unable to recover it. 00:25:36.785 [2024-07-16 01:02:11.454698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.785 [2024-07-16 01:02:11.454729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.454951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.454989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.455216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.455244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.455470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.455502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.455698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.455730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.455951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.455980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.456217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.456250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.456496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.456529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 
00:25:36.786 [2024-07-16 01:02:11.456787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.456816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.457033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.457065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.457263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.457294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.457485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.457513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.457735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.457768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.457959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.457992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.458218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.458247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.458487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.458519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.458732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.458764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.458961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.458990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 
00:25:36.786 [2024-07-16 01:02:11.459241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.459273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.459503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.459532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.459706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.459735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.459989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.460022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.460215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.460247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.460494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.460522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.460760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.460789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.460989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.461019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.461254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.461282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.461508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.461536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 
00:25:36.786 [2024-07-16 01:02:11.461706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.461734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.461923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.461953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.462215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.462243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.462442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.462471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.462705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.462734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.462990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.463023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.463242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.463273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.463545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.463576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.463797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.463829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.464051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.464083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 
00:25:36.786 [2024-07-16 01:02:11.464309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.464366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.464561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.464608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.464819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.464862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.465051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.786 [2024-07-16 01:02:11.465077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.786 qpair failed and we were unable to recover it. 00:25:36.786 [2024-07-16 01:02:11.465256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.465282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.465486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.465529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.465772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.465821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.466001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.466028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.466179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.466205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.466411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.466454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 
00:25:36.787 [2024-07-16 01:02:11.466631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.466675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.466853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.466887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.467067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.467092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.467267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.467311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.467535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.467578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.467765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.467791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.467966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.467993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.468194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.468242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.468525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.468573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.468722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.468748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 
00:25:36.787 [2024-07-16 01:02:11.468925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.468951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.469148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.469176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.469388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.469431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.469617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.469661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.469860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.469891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.470093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.470118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.470348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.470392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.470635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.470677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.470882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.470908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.471088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.471113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 
00:25:36.787 [2024-07-16 01:02:11.471306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.471354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.471596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.471640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.471796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.471821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.472026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.472052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.472233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.472276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.472479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.472521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.472749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.472792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.472974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.473000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.473170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.473213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.473392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.473435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 
00:25:36.787 [2024-07-16 01:02:11.473603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.473646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.473824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.473849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.474064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.474108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.474345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.474387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.787 [2024-07-16 01:02:11.474596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.787 [2024-07-16 01:02:11.474639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.787 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.474818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.474843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.475083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.475126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.475328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.475371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.475537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.475579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.475763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.475788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 
00:25:36.788 [2024-07-16 01:02:11.475980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.476006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.476185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.476228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.476448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.476475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.476675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.476719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.476922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.476949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.477124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.477171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.477346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.477376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.477556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.477604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.477787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.477812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.478040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.478083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 
00:25:36.788 [2024-07-16 01:02:11.478263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.478310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.478502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.478546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.478725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.478750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.478901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.478927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.479120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.479147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.479359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.479387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.479627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.479670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.479845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.479871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.480077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.480120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.480323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.480365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 
00:25:36.788 [2024-07-16 01:02:11.480545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.480590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.480768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.480793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.480972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.481017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.481249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.481292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.481463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.481506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.481705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.481733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.481906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.481933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.482106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.482149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.482341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.482383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.482555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.482598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 
00:25:36.788 [2024-07-16 01:02:11.482771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.482797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.482992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.483040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.483241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.483284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.483509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.483552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.483709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.483735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.788 [2024-07-16 01:02:11.483885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.788 [2024-07-16 01:02:11.483912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.788 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.484112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.484156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.484354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.484398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.484593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.484621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.484823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.484848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 
00:25:36.789 [2024-07-16 01:02:11.485031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.485075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.485229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.485256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.485488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.485529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.485702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.485728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.485932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.485961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.486159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.486202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.486377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.486421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.486604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.486634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.486808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.486834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.487028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.487072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 
00:25:36.789 [2024-07-16 01:02:11.487303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.487331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.487552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.487595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.487810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.487835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.488053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.488096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.488298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.488341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.488548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.488590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.488795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.488820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.488997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.489023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.489221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.489249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.489442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.489485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 
00:25:36.789 [2024-07-16 01:02:11.489689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.489715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.489897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.489923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.490105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.490130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.490306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.490334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.490554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.789 [2024-07-16 01:02:11.490597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.789 qpair failed and we were unable to recover it. 00:25:36.789 [2024-07-16 01:02:11.490774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.490799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.490977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.491002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.491177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.491221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.491458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.491501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.491683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.491709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 
00:25:36.790 [2024-07-16 01:02:11.491893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.491920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.492123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.492166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.492372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.492416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.492623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.492667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.492845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.492871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.493054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.493079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.493266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.493309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.493488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.493531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.493733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.493777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.493936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.493962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 
00:25:36.790 [2024-07-16 01:02:11.494144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.494187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.494394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.494436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.494612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.494657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.494838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.494863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.495044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.495087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.495285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.495328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.495539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.495567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.495786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.495816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.495974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.496001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.496205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.496233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 
00:25:36.790 [2024-07-16 01:02:11.496445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.496474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.496657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.496700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:36.790 [2024-07-16 01:02:11.496904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.790 [2024-07-16 01:02:11.496930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:36.790 qpair failed and we were unable to recover it. 00:25:37.069 [2024-07-16 01:02:11.497166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.069 [2024-07-16 01:02:11.497208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.069 qpair failed and we were unable to recover it. 00:25:37.069 [2024-07-16 01:02:11.497389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.069 [2024-07-16 01:02:11.497434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.069 qpair failed and we were unable to recover it. 00:25:37.069 [2024-07-16 01:02:11.497617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.069 [2024-07-16 01:02:11.497658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.069 qpair failed and we were unable to recover it. 00:25:37.069 [2024-07-16 01:02:11.497859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.069 [2024-07-16 01:02:11.497890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.069 qpair failed and we were unable to recover it. 00:25:37.069 [2024-07-16 01:02:11.498078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.069 [2024-07-16 01:02:11.498103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.069 qpair failed and we were unable to recover it. 00:25:37.069 [2024-07-16 01:02:11.498286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.498330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.498508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.498555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 
00:25:37.070 [2024-07-16 01:02:11.498725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.498750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.498946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.498973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.499190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.499216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.499432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.499475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.499680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.499708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.499881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.499907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.500059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.500084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.500277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.500320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.500494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.500536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.500718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.500763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 
00:25:37.070 [2024-07-16 01:02:11.500989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.501033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.501210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.501253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.501451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.501479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.501670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.501713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.501873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.501906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.502139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.502181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.502392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.502419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.502619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.502663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.502836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.502861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.503025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.503052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 
00:25:37.070 [2024-07-16 01:02:11.503290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.503333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.503515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.503559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.503738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.503763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.503941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.503967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.504143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.504186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.504381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.504423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.504619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.504662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.504838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.504863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.505078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.505104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.505313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.505356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 
00:25:37.070 [2024-07-16 01:02:11.505553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.505581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.505748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.505774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.505998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.506042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.506247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.506290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.506530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.506572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.506752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.506779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.507006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.507049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.507275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.507318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.070 [2024-07-16 01:02:11.507520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-07-16 01:02:11.507563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.070 qpair failed and we were unable to recover it. 00:25:37.071 [2024-07-16 01:02:11.507716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.507743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 
00:25:37.071 [2024-07-16 01:02:11.507932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.507977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 00:25:37.071 [2024-07-16 01:02:11.508179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.508222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 00:25:37.071 [2024-07-16 01:02:11.508423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.508451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 00:25:37.071 [2024-07-16 01:02:11.508623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.508649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 00:25:37.071 [2024-07-16 01:02:11.508795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.508820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 00:25:37.071 [2024-07-16 01:02:11.509013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.509056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 00:25:37.071 [2024-07-16 01:02:11.509253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.509282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 00:25:37.071 [2024-07-16 01:02:11.509511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.509537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 00:25:37.071 [2024-07-16 01:02:11.509706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.509732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 00:25:37.071 [2024-07-16 01:02:11.509950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.509994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 
00:25:37.071 [2024-07-16 01:02:11.510197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.510239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 00:25:37.071 [2024-07-16 01:02:11.510443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.510485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 00:25:37.071 [2024-07-16 01:02:11.510682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.510708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 00:25:37.071 [2024-07-16 01:02:11.510858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.510889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 00:25:37.071 [2024-07-16 01:02:11.511080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.511110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 00:25:37.071 [2024-07-16 01:02:11.511299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.511341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 00:25:37.071 [2024-07-16 01:02:11.511546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.511590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 00:25:37.071 [2024-07-16 01:02:11.511768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.511794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 00:25:37.071 [2024-07-16 01:02:11.512022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.512067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 00:25:37.071 [2024-07-16 01:02:11.512270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.071 [2024-07-16 01:02:11.512313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.071 qpair failed and we were unable to recover it. 
00:25:37.071 [2024-07-16 01:02:11.512524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-16 01:02:11.512567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 01:02:11.512722 through 01:02:11.560711 ...]
00:25:37.076 [2024-07-16 01:02:11.560910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.076 [2024-07-16 01:02:11.560936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.076 qpair failed and we were unable to recover it. 00:25:37.076 [2024-07-16 01:02:11.561105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.076 [2024-07-16 01:02:11.561149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.076 qpair failed and we were unable to recover it. 00:25:37.076 [2024-07-16 01:02:11.561383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.076 [2024-07-16 01:02:11.561426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.076 qpair failed and we were unable to recover it. 00:25:37.076 [2024-07-16 01:02:11.561626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.076 [2024-07-16 01:02:11.561653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.076 qpair failed and we were unable to recover it. 00:25:37.076 [2024-07-16 01:02:11.561843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.076 [2024-07-16 01:02:11.561868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.076 qpair failed and we were unable to recover it. 00:25:37.076 [2024-07-16 01:02:11.562056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.076 [2024-07-16 01:02:11.562082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.076 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.562263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.562306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.562532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.562575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.562747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.562772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.562933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.562959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 
00:25:37.077 [2024-07-16 01:02:11.563166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.563209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.563386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.563431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.563655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.563698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.563842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.563867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.564060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.564103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.564299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.564342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.564543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.564570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.564740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.564767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.564969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.565013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.565208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.565250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 
00:25:37.077 [2024-07-16 01:02:11.565483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.565526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.565673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.565698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.565881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.565907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.566124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.566166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.566399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.566446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.566611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.566654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.566826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.566852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.567022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.567065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.567267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.567311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.567533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.567575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 
00:25:37.077 [2024-07-16 01:02:11.567757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.567784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.567988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.568031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.568200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.568242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.568449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.568492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.568668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.568711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.568859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.568893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.569069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.569112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.569333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.569377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.569622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.569665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.569841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.569866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 
00:25:37.077 [2024-07-16 01:02:11.570104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.570147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.570309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.570352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.570528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.570570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.570749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.570774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.570980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.571006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.571202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.571249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.571457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.077 [2024-07-16 01:02:11.571499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.077 qpair failed and we were unable to recover it. 00:25:37.077 [2024-07-16 01:02:11.571678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.571723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.571902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.571929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.572107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.572150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 
00:25:37.078 [2024-07-16 01:02:11.572357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.572401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.572599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.572641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.572847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.572886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.573092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.573117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.573355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.573383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.573582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.573609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.573775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.573799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.573999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.574025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.574218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.574245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.574472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.574499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 
00:25:37.078 [2024-07-16 01:02:11.574829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.574885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.575112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.575137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.575348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.575375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.575651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.575698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.575934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.575959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.576150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.576191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.576389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.576414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.576642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.576669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.576867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.576903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.577134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.577159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 
00:25:37.078 [2024-07-16 01:02:11.577357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.577384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.577602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.577629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.577896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.577922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.578097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.578122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.578333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.578362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.578560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.578588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.578780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.578808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.578999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.579025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.579228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.579257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.579420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.579448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 
00:25:37.078 [2024-07-16 01:02:11.579646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.579673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.579849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.078 [2024-07-16 01:02:11.579874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.078 qpair failed and we were unable to recover it. 00:25:37.078 [2024-07-16 01:02:11.580065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.580090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.580316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.580344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.580565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.580619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.580807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.580834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.581024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.581050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.581201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.581226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.581403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.581444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.581614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.581641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 
00:25:37.079 [2024-07-16 01:02:11.581858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.581893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.582083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.582108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.582329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.582354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.582582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.582609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.582804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.582831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.583040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.583066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.583266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.583291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.583519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.583546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.583738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.583765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.583983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.584008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 
00:25:37.079 [2024-07-16 01:02:11.584205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.584232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.584453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.584480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.584656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.584682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.584888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.584916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.585148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.585173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.585374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.585403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.585593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.585620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.585790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.585817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.586009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.586034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.586207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.586235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 
00:25:37.079 [2024-07-16 01:02:11.586426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.586453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.586680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.586704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.586933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.586961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.587156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.587183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.587396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.587421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.587619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.587647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.587842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.587869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.588050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.588075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.588295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.588323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.588520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.588547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 
00:25:37.079 [2024-07-16 01:02:11.588743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.588768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.588973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.589002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.589176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.589204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-16 01:02:11.589400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-16 01:02:11.589425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.589617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.589645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.589842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.589869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.590071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.590096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.590294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.590322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.590521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.590548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.590747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.590774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 
00:25:37.080 [2024-07-16 01:02:11.591006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.591031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.591207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.591234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.591408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.591433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.591664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.591691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.591861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.591901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.592126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.592150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.592378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.592405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.592602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.592629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.592821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.592846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.593054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.593083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 
00:25:37.080 [2024-07-16 01:02:11.593245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.593272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.593491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.593515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.593667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.593691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.593865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.593896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.594079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.594103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.594298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.594325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.594494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.594525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.594726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.594750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.594917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.594942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.595125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.595152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 
00:25:37.080 [2024-07-16 01:02:11.595358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.595383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.595578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.595605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.595775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.595802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.596029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.596054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.596260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.596287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.596482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.596509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.596741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.596766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.596972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.596999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.597167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.597195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.597395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.597420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 
00:25:37.080 [2024-07-16 01:02:11.597618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.597645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.597843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.597870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.598104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.598129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.598356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.598383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.598555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-16 01:02:11.598582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-16 01:02:11.598778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.598803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.599009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.599037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.599257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.599285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.599502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.599527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.599726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.599753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 
00:25:37.081 [2024-07-16 01:02:11.599917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.599946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.600142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.600166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.600364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.600391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.600623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.600652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.600829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.600855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.601072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.601100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.601291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.601318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.601540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.601564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.601785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.601812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.602034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.602062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 
00:25:37.081 [2024-07-16 01:02:11.602279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.602303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.602507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.602534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.602724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.602750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.602978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.603003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.603201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.603230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.603414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.603442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.603621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.603645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.603849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.603883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.604110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.604138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.604364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.604389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 
00:25:37.081 [2024-07-16 01:02:11.604628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.604655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.604847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.604874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.605076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.605101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.605266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.605293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.605463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.605491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.605714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.605738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.605954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.605979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.606176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.606204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.606380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.606405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.606599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.606626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 
00:25:37.081 [2024-07-16 01:02:11.606797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.606824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.607013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.607039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.607236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.607264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.607464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.607489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.607677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.607702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.607907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.607937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-16 01:02:11.608128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-16 01:02:11.608155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.608357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.608382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.608534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.608558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.608778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.608805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 
00:25:37.082 [2024-07-16 01:02:11.609029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.609054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.609253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.609281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.609476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.609504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.609675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.609700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.609899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.609932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.610155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.610183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.610406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.610431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.610624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.610652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.610841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.610869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.611101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.611126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 
00:25:37.082 [2024-07-16 01:02:11.611323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.611351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.611543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.611571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.611744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.611769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.611994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.612023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.612225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.612252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.612430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.612455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.612609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.612635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.612826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.612855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.613073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.613098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.613302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.613329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 
00:25:37.082 [2024-07-16 01:02:11.613547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.613574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.613798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.613823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.614060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.614088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.614252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.614279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.614499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.614523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.614720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.614747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.614946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.614974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.615168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.615193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.615392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.615419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.615585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.615612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 
00:25:37.082 [2024-07-16 01:02:11.615811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.615836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.616014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.082 [2024-07-16 01:02:11.616040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.082 qpair failed and we were unable to recover it. 00:25:37.082 [2024-07-16 01:02:11.616245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.616286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.616491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.616516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.616738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.616766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.616955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.616984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.617185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.617210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.617414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.617441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.617637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.617665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.617892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.617933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 
00:25:37.083 [2024-07-16 01:02:11.618107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.618132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.618338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.618366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.618559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.618584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.618744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.618769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.618972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.618997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.619213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.619238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.619446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.619473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.619693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.619720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.619921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.619946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.620120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.620147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 
00:25:37.083 [2024-07-16 01:02:11.620323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.620351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.620570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.620594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.620801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.620828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.621039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.621064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.621242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.621267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.621432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.621459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.621656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.621684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.621886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.621911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.622110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.622137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.622334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.622362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 
00:25:37.083 [2024-07-16 01:02:11.622587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.622612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.622783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.622813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.623043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.623072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.623296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.623320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.623494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.623522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.623724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.623749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.623928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.623953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.624179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.624207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.624427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.624454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.624624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.624649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 
00:25:37.083 [2024-07-16 01:02:11.624843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.624871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.625065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.625093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.625258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.625287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.083 [2024-07-16 01:02:11.625485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.083 [2024-07-16 01:02:11.625512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.083 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.625703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.625730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.625955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.625980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.626182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.626210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.626405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.626432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.626631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.626656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.626857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.626892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 
00:25:37.084 [2024-07-16 01:02:11.627073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.627097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.627303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.627327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.627554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.627581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.627798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.627826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.628035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.628060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.628320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.628348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.628567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.628595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.628785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.628812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.629026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.629051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.629249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.629276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 
00:25:37.084 [2024-07-16 01:02:11.629452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.629477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.629647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.629672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.629872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.629906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.630108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.630132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.630300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.630328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.630516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.630543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.630743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.630768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.630924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.630949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.631128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.631156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.631384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.631408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 
00:25:37.084 [2024-07-16 01:02:11.631639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.631667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.631892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.631921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.632097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.632122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.632279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.632304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.632504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.632531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.632751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.632776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.632975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.633003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.633200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.633227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.633440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.633465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.633698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.633726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 
00:25:37.084 [2024-07-16 01:02:11.633911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.633939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.634175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.634199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.634391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.634419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.634606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.634640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.634871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.084 [2024-07-16 01:02:11.634901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.084 qpair failed and we were unable to recover it. 00:25:37.084 [2024-07-16 01:02:11.635113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.635140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.635308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.635336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.635535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.635559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.635779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.635807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.636028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.636057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 
00:25:37.085 [2024-07-16 01:02:11.636280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.636305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.636479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.636506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.636694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.636721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.636958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.636983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.637182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.637209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.637430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.637458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.637661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.637687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.637892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.637933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.638086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.638111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.638309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.638334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 
00:25:37.085 [2024-07-16 01:02:11.638500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.638527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.638723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.638751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.638948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.638974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.639169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.639196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.639393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.639421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.639643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.639668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.639884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.639912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.640104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.640131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.640334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.640359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.640556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.640583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 
00:25:37.085 [2024-07-16 01:02:11.640751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.640784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.640984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.641009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.641206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.641234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.641461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.641487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.641692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.641717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.641909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.641937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.642132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.642160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.642360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.642385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.642575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.642603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.642771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.642798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 
00:25:37.085 [2024-07-16 01:02:11.643026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.643051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.643291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.643316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.643486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.643511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.643659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.643684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.643897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.643922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.644162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.644190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-16 01:02:11.644396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-16 01:02:11.644421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.644620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.644649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.644843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.644870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.645077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.645102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 
00:25:37.086 [2024-07-16 01:02:11.645325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.645352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.645541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.645569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.645803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.645829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.646033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.646061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.646228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.646256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.646452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.646476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.646696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.646721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.646880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.646905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.647067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.647091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.647266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.647294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 
00:25:37.086 [2024-07-16 01:02:11.647467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.647494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.647689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.647713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.647951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.647977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.648193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.648220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.648452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.648476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.648652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.648680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.648904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.648932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.649123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.649148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.649308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.649336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.649521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.649548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 
00:25:37.086 [2024-07-16 01:02:11.649745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.649770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.649969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.650002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.650202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.650229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.650428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.650453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.650631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.650659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.650850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.650883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.651061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.651086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.651279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.651307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.651498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.651525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.651723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.651748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 
00:25:37.086 [2024-07-16 01:02:11.651951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.651979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.652203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.652230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.652417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.652441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-16 01:02:11.652620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-16 01:02:11.652645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.652840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.652867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.653084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.653109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.653281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.653308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.653516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.653543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.653744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.653769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.653947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.653976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 
00:25:37.087 [2024-07-16 01:02:11.654163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.654191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.654411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.654436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.654638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.654666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.654862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.654895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.655120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.655145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.655344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.655371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.655562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.655589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.655788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.655813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.656018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.656051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.656227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.656255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 
00:25:37.087 [2024-07-16 01:02:11.656451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.656476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.656675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.656702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.656888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.656916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.657120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.657144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.657346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.657373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.657562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.657589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.657809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.657837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.658018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.658043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.658251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.658278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.658476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.658501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 
00:25:37.087 [2024-07-16 01:02:11.658735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.658762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.658971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.658997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.659151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.659176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.659411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.659438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.659600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.659627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.659826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.659851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.660070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.660098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.660320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.660348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.660517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.660542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-16 01:02:11.660743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.660770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 
00:25:37.087 [2024-07-16 01:02:11.660938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-16 01:02:11.660966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.661202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.661227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.661470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.661495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.661676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.661701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.661881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.661906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.662059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.662083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.662274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.662302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.662464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.662489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.662683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.662710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.662886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.662914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 
00:25:37.088 [2024-07-16 01:02:11.663083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.663108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.663303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.663330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.663499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.663527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.663721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.663745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.663931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.663957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.664109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.664135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.664337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.664362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.664567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.664595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.664786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.664814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.665021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.665051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 
00:25:37.088 [2024-07-16 01:02:11.665248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.665276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.665440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.665468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.665638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.665663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.665851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.665886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.666089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.666116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.666343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.666367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.666565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.666592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.666785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.666813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.667015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.667040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.667235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.667262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 
00:25:37.088 [2024-07-16 01:02:11.667435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.667462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.667653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.667677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.667922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.667948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.668128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.668169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.668363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.668388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.668589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.668617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.668789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.668816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.669043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.669068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.669268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.669295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.669490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.669517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 
00:25:37.088 [2024-07-16 01:02:11.669747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.669772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.669952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-16 01:02:11.669977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-16 01:02:11.670154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.670179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.670354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.670379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.670526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.670550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.670743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.670770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.670966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.670995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.671153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.671178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.671355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.671379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.671551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.671575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 
00:25:37.089 [2024-07-16 01:02:11.671754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.671779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.671952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.671978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.672182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.672207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.672382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.672410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.672600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.672628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.672835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.672860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.673064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.673092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.673291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.673319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.673481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.673506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.673734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.673761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 
00:25:37.089 [2024-07-16 01:02:11.673960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.673988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.674190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.674215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.674419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.674447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.674641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.674668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.674870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.674900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.675130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.675154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.675360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.675385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.675629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.675654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.675848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.675893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.676089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.676116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 
00:25:37.089 [2024-07-16 01:02:11.676323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.676348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.676545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.676572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.676801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.676826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.676977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.677003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.677203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.677230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.677424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.677453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.677678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.677703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.677905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.677933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.678098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.678126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.678328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.678352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 
00:25:37.089 [2024-07-16 01:02:11.678552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.678580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.678772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.678800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.679029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.679054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-16 01:02:11.679264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-16 01:02:11.679291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.679481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.679509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.679682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.679707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.679885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.679911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.680146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.680181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.680408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.680433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.680672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.680699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 
00:25:37.090 [2024-07-16 01:02:11.680932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.680965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.681136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.681161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.681366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.681394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.681560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.681587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.681776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.681800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.682012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.682040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.682267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.682292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.682473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.682498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.682700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.682727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.682917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.682945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 
00:25:37.090 [2024-07-16 01:02:11.683165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.683190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.683399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.683427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.683600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.683628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.683799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.683824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.683997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.684025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.684221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.684248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.684472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.684497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.684701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.684728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.684906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.684931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.685104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.685129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 
00:25:37.090 [2024-07-16 01:02:11.685332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.685360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.685576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.685603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.685769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.685793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.685948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.685973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.686126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.686168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.686391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.686416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.686593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.686620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.686814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.686841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.687018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.687043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.687194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.687219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 
00:25:37.090 [2024-07-16 01:02:11.687437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.687465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.687665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.687690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.687904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.687945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.688129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.688172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.688344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-16 01:02:11.688369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-16 01:02:11.688530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.688557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.688751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.688778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.689001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.689026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.689234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.689262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.689478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.689505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 
00:25:37.091 [2024-07-16 01:02:11.689709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.689736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.689941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.689970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.690168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.690195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.690372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.690396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.690577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.690602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.690778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.690806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.690981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.691006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.691199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.691227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.691420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.691448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.691644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.691669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 
00:25:37.091 [2024-07-16 01:02:11.691873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.691912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.692106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.692134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.692343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.692368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.692556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.692583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.692749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.692777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.692978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.693004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.693163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.693189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.693416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.693444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.693645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.693669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.693869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.693903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 
00:25:37.091 [2024-07-16 01:02:11.694133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.694158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.694301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.694326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.694522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.694549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.694720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.694747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.694924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.694950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.695131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.695163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.695382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.695410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.695590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.695615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.695819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.695844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-16 01:02:11.696038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-16 01:02:11.696066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 
00:25:37.091 [2024-07-16 01:02:11.696233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.696258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.696433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.696460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.696648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.696676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.696883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.696908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.697082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.697110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.697306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.697333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.697501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.697526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.697697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.697726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.697886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.697929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.698108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.698133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 
00:25:37.092 [2024-07-16 01:02:11.698302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.698328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.698517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.698544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.698714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.698739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.698965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.698993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.699192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.699219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.699395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.699420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.699602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.699627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.699789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.699816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.699998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.700023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.700224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.700252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 
00:25:37.092 [2024-07-16 01:02:11.700404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.700432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.700600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.700625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.700822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.700849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.701031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.701056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.701214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.701238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.701439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.701467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.701636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.701664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.701858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.701887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.702092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.702120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.702336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.702363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 
00:25:37.092 [2024-07-16 01:02:11.702539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.702563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.702790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.702817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.702987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.703015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.703211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.703236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.703438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.703465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.703665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.703693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.703896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.703921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.704124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.704152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.704373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.704400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.704597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.704622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 
00:25:37.092 [2024-07-16 01:02:11.704798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.704826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.092 [2024-07-16 01:02:11.705001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.092 [2024-07-16 01:02:11.705028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.092 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.705222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.705246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.705471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.705499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.705667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.705694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.705889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.705915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.706092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.706119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.706273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.706301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.706485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.706510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.706664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.706690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 
00:25:37.093 [2024-07-16 01:02:11.706890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.706919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.707093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.707117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.707285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.707312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.707542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.707567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.707733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.707761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.707965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.707991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.708181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.708209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.708377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.708402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.708601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.708629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.708794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.708821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 
00:25:37.093 [2024-07-16 01:02:11.709025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.709051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.709223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.709250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.709445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.709472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.709672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.709701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.709903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.709931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.710119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.710147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.710344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.710368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.710516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.710557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.710733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.710760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.710936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.710961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 
00:25:37.093 [2024-07-16 01:02:11.711152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.711180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.711343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.711370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.711565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.711590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.711785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.711813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.712016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.712041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.712222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.712246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.712456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.712483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.712721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.712749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.712940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.712965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.713136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.713163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 
00:25:37.093 [2024-07-16 01:02:11.713358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.713385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.713604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.713628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.713798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.093 [2024-07-16 01:02:11.713825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.093 qpair failed and we were unable to recover it. 00:25:37.093 [2024-07-16 01:02:11.714017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.714046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.714239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.714263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.714494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.714522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.714691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.714718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.714919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.714944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.715145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.715173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.715367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.715394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 
00:25:37.094 [2024-07-16 01:02:11.715571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.715597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.715784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.715811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.716014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.716043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.716220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.716244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.716411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.716438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.716629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.716656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.716884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.716910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.717115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.717142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.717338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.717365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.717528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.717553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 
00:25:37.094 [2024-07-16 01:02:11.717775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.717803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.717984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.718009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.718167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.718192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.718417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.718445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.718663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.718695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.718882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.718908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.719089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.719113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.719318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.719345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.719570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.719595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.719747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.719771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 
00:25:37.094 [2024-07-16 01:02:11.719936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.719961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.720108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.720134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.720311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.720335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.720512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.720539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.720736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.720760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.720964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.720992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.721182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.721209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.721405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.721430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.721607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.721634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.721857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.721888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 
00:25:37.094 [2024-07-16 01:02:11.722065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.722089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.722298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.722326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.722487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.722514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.722693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.722718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-16 01:02:11.722945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-16 01:02:11.722972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.723150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.723179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.723344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.723369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.723564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.723592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.723762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.723789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.723993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.724019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 
00:25:37.095 [2024-07-16 01:02:11.724193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.724221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.724411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.724445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.724642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.724667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.724848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.724872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.725089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.725113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.725300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.725325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.725481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.725509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.725708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.725735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.725950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.725975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.726155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.726180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 
00:25:37.095 [2024-07-16 01:02:11.726360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.726388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.726562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.726588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.726800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.726828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.727034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.727060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.727199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.727223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.727420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.727447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.727609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.727637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.727834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.727859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.728066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.728091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.728286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.728314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 
00:25:37.095 [2024-07-16 01:02:11.728478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.728502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.728648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.728674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.728845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.728869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.729065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.729090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.729257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.729284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.729477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.729504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.729699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.729724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.729927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.729956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.730123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.730151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-16 01:02:11.730324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.730349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 
00:25:37.095 [2024-07-16 01:02:11.730518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-16 01:02:11.730546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.730722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.730751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.730927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.730953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.731133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.731158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.731358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.731385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.731557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.731583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.731808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.731835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.732013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.732041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.732216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.732242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.732468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.732495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 
00:25:37.096 [2024-07-16 01:02:11.732670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.732699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.732902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.732939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.733095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.733124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.733273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.733298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.733503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.733528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.733724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.733752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.733915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.733944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.734156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.734181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.734399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.734426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.734614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.734641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 
00:25:37.096 [2024-07-16 01:02:11.734831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.734858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.735045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.735071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.735242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.735270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.735482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.735507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.735679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.735707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.735905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.735939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.736115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.736140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.736335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.736362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.736528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.736557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.736792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.736819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 
00:25:37.096 [2024-07-16 01:02:11.737037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.737062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.737248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.737276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.737476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.737501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.737677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.737705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.737929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.737958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.738131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.738156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.738359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.738384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.738588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.738616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.738822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.738847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-16 01:02:11.739034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.739063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 
00:25:37.096 [2024-07-16 01:02:11.739235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-16 01:02:11.739264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.097 [2024-07-16 01:02:11.739433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.097 [2024-07-16 01:02:11.739458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.097 qpair failed and we were unable to recover it. 00:25:37.097 [2024-07-16 01:02:11.739616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.097 [2024-07-16 01:02:11.739644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.097 qpair failed and we were unable to recover it. 00:25:37.097 [2024-07-16 01:02:11.739851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.097 [2024-07-16 01:02:11.739886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.097 qpair failed and we were unable to recover it. 00:25:37.097 [2024-07-16 01:02:11.740090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.097 [2024-07-16 01:02:11.740115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.097 qpair failed and we were unable to recover it. 00:25:37.097 [2024-07-16 01:02:11.740280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.097 [2024-07-16 01:02:11.740307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.097 qpair failed and we were unable to recover it. 00:25:37.097 [2024-07-16 01:02:11.740512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.097 [2024-07-16 01:02:11.740538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.097 qpair failed and we were unable to recover it. 00:25:37.097 [2024-07-16 01:02:11.740717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.097 [2024-07-16 01:02:11.740741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.097 qpair failed and we were unable to recover it. 00:25:37.097 [2024-07-16 01:02:11.740914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.097 [2024-07-16 01:02:11.740942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.097 qpair failed and we were unable to recover it. 00:25:37.097 [2024-07-16 01:02:11.741133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.097 [2024-07-16 01:02:11.741161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.097 qpair failed and we were unable to recover it. 
00:25:37.097 [2024-07-16 01:02:11.741331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-16 01:02:11.741355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097-00:25:37.102 [the same triplet - connect() failed (errno = 111), sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." - repeats continuously from 2024-07-16 01:02:11.741331 through 01:02:11.787505]
00:25:37.102 [2024-07-16 01:02:11.787708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-16 01:02:11.787732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-16 01:02:11.787912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-16 01:02:11.787940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-16 01:02:11.788137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-16 01:02:11.788164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-16 01:02:11.788388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-16 01:02:11.788412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-16 01:02:11.788612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-16 01:02:11.788639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-16 01:02:11.788829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-16 01:02:11.788856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-16 01:02:11.789088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-16 01:02:11.789113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-16 01:02:11.789317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-16 01:02:11.789344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-16 01:02:11.789535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-16 01:02:11.789563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-16 01:02:11.789782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-16 01:02:11.789809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 
00:25:37.102 [2024-07-16 01:02:11.790003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-16 01:02:11.790028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-16 01:02:11.790234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-16 01:02:11.790262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-16 01:02:11.790462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-16 01:02:11.790488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-16 01:02:11.790706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-16 01:02:11.790733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-16 01:02:11.790955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-16 01:02:11.790984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-16 01:02:11.791178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-16 01:02:11.791203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-16 01:02:11.791425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-16 01:02:11.791452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.791603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.791631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.791793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.791817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.792016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.792044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 
00:25:37.103 [2024-07-16 01:02:11.792216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.792243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.792443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.792469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.792668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.792695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.792929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.792957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.793131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.793160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.793402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.793430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.793628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.793655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.793822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.793846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.794031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.794060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.794283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.794310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 
00:25:37.103 [2024-07-16 01:02:11.794505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.794530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.794732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.794760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.794960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.794985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.795160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.795185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.795361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.795385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.795610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.795637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.795808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.795833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.796071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.796100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.796260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.796287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.796478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.796503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 
00:25:37.103 [2024-07-16 01:02:11.796655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.796680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.796893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.796934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.797137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.797162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.797384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.797411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.797614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.797638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.797838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.797865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.798060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.798085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.798311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.798338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.798563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.798587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.798804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.798831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 
00:25:37.103 [2024-07-16 01:02:11.799049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.799075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.799265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.799290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.799460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.799487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.799702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.799729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.799956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.799981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.800207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.800234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.800463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.800491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-16 01:02:11.800694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-16 01:02:11.800723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.800928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.800957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.801112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.801139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 
00:25:37.104 [2024-07-16 01:02:11.801339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.801363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.801531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.801559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.801783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.801808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.801962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.801987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.802139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.802163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.802319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.802347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.802528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.802552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.802781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.802808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.802981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.803009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.803228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.803253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 
00:25:37.104 [2024-07-16 01:02:11.803475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.803514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.803735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.803775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.803999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.804028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.804233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.804261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.804416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.804444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.804667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.804692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.804875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.804915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.805116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.805151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.805349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.805374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.805585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.805610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 
00:25:37.104 [2024-07-16 01:02:11.805838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.805865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.806067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.806092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.806263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.806291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.806510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.806538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.806733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.806761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.806969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.806995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.807149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.807192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-16 01:02:11.807368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-16 01:02:11.807392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.384 [2024-07-16 01:02:11.807594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.384 [2024-07-16 01:02:11.807622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.384 qpair failed and we were unable to recover it. 00:25:37.384 [2024-07-16 01:02:11.807817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.384 [2024-07-16 01:02:11.807845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.384 qpair failed and we were unable to recover it. 
00:25:37.384 [2024-07-16 01:02:11.808735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.384 [2024-07-16 01:02:11.808769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.384 qpair failed and we were unable to recover it. 00:25:37.384 [2024-07-16 01:02:11.809003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.384 [2024-07-16 01:02:11.809032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.384 qpair failed and we were unable to recover it. 00:25:37.384 [2024-07-16 01:02:11.809281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.384 [2024-07-16 01:02:11.809314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.384 qpair failed and we were unable to recover it. 00:25:37.384 [2024-07-16 01:02:11.809514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.384 [2024-07-16 01:02:11.809540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.384 qpair failed and we were unable to recover it. 00:25:37.384 [2024-07-16 01:02:11.809703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.384 [2024-07-16 01:02:11.809729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.384 qpair failed and we were unable to recover it. 00:25:37.384 [2024-07-16 01:02:11.809888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.384 [2024-07-16 01:02:11.809914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.384 qpair failed and we were unable to recover it. 00:25:37.384 [2024-07-16 01:02:11.810093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.384 [2024-07-16 01:02:11.810118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.384 qpair failed and we were unable to recover it. 00:25:37.384 [2024-07-16 01:02:11.810320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.384 [2024-07-16 01:02:11.810348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.384 qpair failed and we were unable to recover it. 00:25:37.384 [2024-07-16 01:02:11.810567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.384 [2024-07-16 01:02:11.810595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.384 qpair failed and we were unable to recover it. 00:25:37.384 [2024-07-16 01:02:11.810803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.384 [2024-07-16 01:02:11.810828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.384 qpair failed and we were unable to recover it. 
00:25:37.384 [2024-07-16 01:02:11.811005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.384 [2024-07-16 01:02:11.811034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.384 qpair failed and we were unable to recover it. 00:25:37.384 [2024-07-16 01:02:11.811236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.384 [2024-07-16 01:02:11.811264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.384 qpair failed and we were unable to recover it. 00:25:37.384 [2024-07-16 01:02:11.811439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.384 [2024-07-16 01:02:11.811464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.384 qpair failed and we were unable to recover it. 00:25:37.384 [2024-07-16 01:02:11.811638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.384 [2024-07-16 01:02:11.811666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.384 qpair failed and we were unable to recover it. 00:25:37.384 [2024-07-16 01:02:11.811857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.384 [2024-07-16 01:02:11.811904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.384 qpair failed and we were unable to recover it. 00:25:37.384 [2024-07-16 01:02:11.812104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.384 [2024-07-16 01:02:11.812129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.384 qpair failed and we were unable to recover it. 00:25:37.384 [2024-07-16 01:02:11.812313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.812340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.812535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.812563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.812793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.812819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.813032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.813062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 
00:25:37.385 [2024-07-16 01:02:11.813287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.813313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.813484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.813509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.813730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.813759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.813983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.814012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.814189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.814214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.814385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.814413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.814607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.814635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.814836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.814862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.815052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.815080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.815248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.815277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 
00:25:37.385 [2024-07-16 01:02:11.815482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.815507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.815709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.815736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.815961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.815990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.816154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.816179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.816370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.816398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.816588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.816615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.816841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.816866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.817096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.817125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.817329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.817356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 00:25:37.385 [2024-07-16 01:02:11.817536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.385 [2024-07-16 01:02:11.817561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.385 qpair failed and we were unable to recover it. 
00:25:37.385 [2024-07-16 01:02:11.817763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.385 [2024-07-16 01:02:11.817788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420
00:25:37.385 qpair failed and we were unable to recover it.
[... the same three-record sequence (connect() failed, errno = 111; sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 01:02:11.818044 through 01:02:11.864535 ...]
00:25:37.390 [2024-07-16 01:02:11.864754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.390 [2024-07-16 01:02:11.864779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420
00:25:37.390 qpair failed and we were unable to recover it.
00:25:37.390 [2024-07-16 01:02:11.864949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.390 [2024-07-16 01:02:11.864977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.390 qpair failed and we were unable to recover it. 00:25:37.390 [2024-07-16 01:02:11.865198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.390 [2024-07-16 01:02:11.865225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.390 qpair failed and we were unable to recover it. 00:25:37.390 [2024-07-16 01:02:11.865419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.390 [2024-07-16 01:02:11.865443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.390 qpair failed and we were unable to recover it. 00:25:37.390 [2024-07-16 01:02:11.865599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.390 [2024-07-16 01:02:11.865624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.390 qpair failed and we were unable to recover it. 00:25:37.390 [2024-07-16 01:02:11.865800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.390 [2024-07-16 01:02:11.865825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.390 qpair failed and we were unable to recover it. 00:25:37.390 [2024-07-16 01:02:11.865998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.390 [2024-07-16 01:02:11.866023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.390 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.866215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.866243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.866432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.866461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.866661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.866686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.866908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.866937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 
00:25:37.391 [2024-07-16 01:02:11.867130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.867158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.867387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.867412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.867612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.867640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.867862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.867896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.868099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.868123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.868324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.868351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.868546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.868574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.868801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.868826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.869001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.869030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.869221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.869249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 
00:25:37.391 [2024-07-16 01:02:11.869444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.869469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.869672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.869700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.869931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.869957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.870137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.870163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.870388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.870415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.870622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.870650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.870886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.870912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.871098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.871126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.871290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.871319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.871517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.871543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 
00:25:37.391 [2024-07-16 01:02:11.871745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.871773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.871963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.871992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.872212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.872238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.872473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.872501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.872694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.872722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.872892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.872928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.873136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.873164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.873329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.873357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.873552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.873581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.873813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.873840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 
00:25:37.391 [2024-07-16 01:02:11.874074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.874103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.874306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.874331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.874533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.874561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.874741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.874768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.874941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.874967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.875168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.875196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.875430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.391 [2024-07-16 01:02:11.875455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.391 qpair failed and we were unable to recover it. 00:25:37.391 [2024-07-16 01:02:11.875659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.875684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.875828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.875853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.876033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.876058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 
00:25:37.392 [2024-07-16 01:02:11.876236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.876261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.876459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.876487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.876653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.876681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.876890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.876940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.877095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.877120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.877342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.877370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.877575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.877600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.877800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.877827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.878029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.878058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.878236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.878260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 
00:25:37.392 [2024-07-16 01:02:11.878461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.878488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.878647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.878675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.878868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.878902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.879135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.879163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.879352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.879379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.879612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.879641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.879831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.879855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.880038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.880064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.880211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.880236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.880426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.880453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 
00:25:37.392 [2024-07-16 01:02:11.880675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.880702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.880897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.880923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.881106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.881134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.881304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.881332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.881533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.881559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.881780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.881808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.881998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.882032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.882261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.882286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.882469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.882494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.882654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.882693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 
00:25:37.392 [2024-07-16 01:02:11.882908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.882939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.883113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.883142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.883308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.883337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.883572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.883598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.883832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.883896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.884129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.884155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.884370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.884395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.392 [2024-07-16 01:02:11.884609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.392 [2024-07-16 01:02:11.884649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.392 qpair failed and we were unable to recover it. 00:25:37.393 [2024-07-16 01:02:11.884843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.393 [2024-07-16 01:02:11.884871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.393 qpair failed and we were unable to recover it. 00:25:37.393 [2024-07-16 01:02:11.885087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.393 [2024-07-16 01:02:11.885113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.393 qpair failed and we were unable to recover it. 
00:25:37.393 [2024-07-16 01:02:11.885308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.393 [2024-07-16 01:02:11.885336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.885548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.885594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.885767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.885797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.885978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.886007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.886233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.886259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.886468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.886494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.886721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.886749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.886946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.886974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.887141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.887166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.887328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.887353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 
00:25:37.394 [2024-07-16 01:02:11.887528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.887553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.887758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.887783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.887975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.888003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.888192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.888220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.888441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.888466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.888674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.888702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.888875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.888909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.889086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.889112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.889290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.889318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.889535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.889563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 
00:25:37.394 [2024-07-16 01:02:11.889766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.889791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.890015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.890044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.890224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.890254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.890484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.890509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.890704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.890733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.890969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.890997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.891198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.891224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.891422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.891451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.891650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.891676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.891888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.891914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 
00:25:37.394 [2024-07-16 01:02:11.892145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.892173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.892371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.892399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.892572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.892597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.892795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.892823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.893005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.893031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.893181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.893207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.893419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.893447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.893717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.893768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.893952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.893979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 00:25:37.394 [2024-07-16 01:02:11.894137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.894162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.394 qpair failed and we were unable to recover it. 
00:25:37.394 [2024-07-16 01:02:11.894392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.394 [2024-07-16 01:02:11.894420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.894628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.894652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.894852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.894894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.895078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.895106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.895315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.895340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.895568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.895596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.895829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.895858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.896059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.896084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.896302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.896331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.896523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.896551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 
00:25:37.395 [2024-07-16 01:02:11.896716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.896741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.896894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.896937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.897165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.897190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.897370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.897395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.897571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.897601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.897823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.897851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.898058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.898084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.898254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.898282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.898449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.898477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.898701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.898726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 
00:25:37.395 [2024-07-16 01:02:11.898955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.898984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.899224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.899252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.899428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.899454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.899678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.899706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.899905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.899933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.900110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.900136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.900362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.900390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.900584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.900630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.900857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.900893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.901057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.901083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 
00:25:37.395 [2024-07-16 01:02:11.901312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.901340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.901572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.901597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.901762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.901789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.902022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.902048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.902269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.902295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.902500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.902528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.902761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.902810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.903017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.903044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.903208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.903236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.903434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.903461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 
00:25:37.395 [2024-07-16 01:02:11.903614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.903639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.395 qpair failed and we were unable to recover it. 00:25:37.395 [2024-07-16 01:02:11.903839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.395 [2024-07-16 01:02:11.903869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.904077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.904111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.904344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.904369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.904584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.904609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.904819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.904844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.905080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.905106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.905308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.905336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.905532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.905560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.905759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.905784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 
00:25:37.396 [2024-07-16 01:02:11.905986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.906015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.906193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.906221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.906423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.906450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.906633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.906659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.906830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.906857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.907088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.907114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.907331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.907359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.907551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.907579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.907779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.907803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.908006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.908034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 
00:25:37.396 [2024-07-16 01:02:11.908238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.908266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.908464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.908489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.908720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.908748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.908938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.908966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.909163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.909188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.909390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.909418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.909614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.909644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.909848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.909875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.910095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.910123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.910356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.910401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 
00:25:37.396 [2024-07-16 01:02:11.910610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.910635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.910788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.910813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.911015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.911045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.911221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.911245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.911443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.911471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.911669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.911696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.911939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.911965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.912165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.912192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.912472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.912521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.912940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.912966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 
00:25:37.396 [2024-07-16 01:02:11.913147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.913171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.913322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.913346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.396 qpair failed and we were unable to recover it. 00:25:37.396 [2024-07-16 01:02:11.913494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.396 [2024-07-16 01:02:11.913519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.913704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.913732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.913955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.913983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.914156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.914181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.914383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.914410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.914641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.914668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.914846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.914870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.915060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.915085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 
00:25:37.397 [2024-07-16 01:02:11.915255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.915283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.915501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.915526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.915707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.915736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.915934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.915963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.916142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.916167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.916323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.916347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.916647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.916700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.916887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.916913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.917091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.917116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.917293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.917321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 
00:25:37.397 [2024-07-16 01:02:11.917498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.917522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.917723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.917748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.917968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.917994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.918198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.918222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.918449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.918476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.918793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.918842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.919047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.919071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.919276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.919304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.919529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.919557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.919783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.919807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 
00:25:37.397 [2024-07-16 01:02:11.920036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.920062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.920259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.920287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.920483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.920508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.920742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.920769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.920974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.920999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.921201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.921226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.921405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.921433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.921699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.921748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.921929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.921954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.922111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.922136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 
00:25:37.397 [2024-07-16 01:02:11.922347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.922374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.922570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.922594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.922769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.922799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.922987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.397 [2024-07-16 01:02:11.923015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.397 qpair failed and we were unable to recover it. 00:25:37.397 [2024-07-16 01:02:11.923252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.923277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.923504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.923532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.923764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.923788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.923973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.923999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.924192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.924219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.924418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.924446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 
00:25:37.398 [2024-07-16 01:02:11.924650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.924684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.924919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.924962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2749692 Killed "${NVMF_APP[@]}" "$@" 00:25:37.398 [2024-07-16 01:02:11.925144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.925169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.925370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.925395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 01:02:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:25:37.398 [2024-07-16 01:02:11.925584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.925612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 01:02:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:37.398 01:02:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:37.398 [2024-07-16 01:02:11.925804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.925837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 01:02:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:37.398 01:02:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.398 [2024-07-16 01:02:11.926066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.926092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.926277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.926305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 
00:25:37.398 [2024-07-16 01:02:11.926529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.926553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.926729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.926754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.926930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.926958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.927124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.927152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.927381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.927406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.927615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.927642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.927862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.927897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.928127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.928152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.928325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.928352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.928547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.928575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 
00:25:37.398 [2024-07-16 01:02:11.928748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.928777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.928945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.928974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.929174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.929199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.929384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.929409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.929613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.929640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.929872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.929902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.930084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.930109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.930296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.930320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.398 [2024-07-16 01:02:11.930544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.398 [2024-07-16 01:02:11.930572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.398 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.930744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.930768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 
00:25:37.399 [2024-07-16 01:02:11.930967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.930997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 01:02:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2750250 00:25:37.399 01:02:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:37.399 [2024-07-16 01:02:11.931194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 01:02:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2750250 00:25:37.399 [2024-07-16 01:02:11.931222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.931445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 01:02:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2750250 ']' 00:25:37.399 [2024-07-16 01:02:11.931475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 01:02:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.399 [2024-07-16 01:02:11.931678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.931706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 01:02:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:37.399 [2024-07-16 01:02:11.931925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.931954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 01:02:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.399 01:02:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:37.399 [2024-07-16 01:02:11.932154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.932180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 
00:25:37.399 01:02:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.399 [2024-07-16 01:02:11.932385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.932413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.932605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.932638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.932836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.932861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.933045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.933073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.933278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.933307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.933538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.933562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.933772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.933800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.933978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.934008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.934217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.934245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.934393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.934418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 
00:25:37.399 [2024-07-16 01:02:11.934616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.934644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.934814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.934841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.935043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.935071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.935278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.935306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.935477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.935502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.935710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.935738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.935961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.935986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.936188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.936212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.936421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.936449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.936610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.936637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 
00:25:37.399 [2024-07-16 01:02:11.936815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.936845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.937053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.937081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.937250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.937277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.937453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.937478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.937702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.937730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.937948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.937977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.938178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.938203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.938408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.938436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.938633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.938660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-16 01:02:11.938887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-16 01:02:11.938912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 
00:25:37.400 [2024-07-16 01:02:11.939138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.939166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.939371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.939398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.939595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.939620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.939824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.939851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.940043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.940070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.940255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.940280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.940447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.940472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.940654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.940681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.940892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.940918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.941113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.941142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 
00:25:37.400 [2024-07-16 01:02:11.941360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.941388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.941559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.941584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.941784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.941811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.941979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.942008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.942169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.942195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.942425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.942486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.942648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.942676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.942930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.942956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.943200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.943228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.943460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.943485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 
00:25:37.400 [2024-07-16 01:02:11.943634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.943659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.943851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.943885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.944123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.944148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.944357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.944382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.944559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.944587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.944783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.944811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.945007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.945032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.945200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.945229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.945450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.945478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.945645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.945670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 
00:25:37.400 [2024-07-16 01:02:11.945841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.945870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.946081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.946114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.946316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.946341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.946515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.946540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.946769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.946796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.947020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.947046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.947221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.947248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.947421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.947448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.947642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.947666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.947839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.947867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 
00:25:37.400 [2024-07-16 01:02:11.948065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-16 01:02:11.948093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-16 01:02:11.948318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.948342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.948520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.948548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.948737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.948764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.948957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.948983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.949182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.949210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.949374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.949402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.949597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.949622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.949846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.949874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.950055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.950080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 
00:25:37.401 [2024-07-16 01:02:11.950257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.950282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.950480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.950507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.950775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.950826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.951079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.951105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.951309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.951337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.951537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.951564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.951782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.951807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.952010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.952038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.952238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.952267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.952472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.952497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 
00:25:37.401 [2024-07-16 01:02:11.952698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.952726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.952924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.952952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.953123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.953148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.953346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.953374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.953572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.953600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.953774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.953799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.954024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.954052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.954221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.954248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.954451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.954476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.954672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.954700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 
00:25:37.401 [2024-07-16 01:02:11.954919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.954947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.955174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.955199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.955373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.955401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.955567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.955595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.955783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.955810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.956015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.956040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.956243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.956271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.956444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.956469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.956673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.956698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.956905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.956947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 
00:25:37.401 [2024-07-16 01:02:11.957125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.957150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.957321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.957349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-16 01:02:11.957569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-16 01:02:11.957597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.957781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.957806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.958010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.958038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.958235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.958262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.958436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.958461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.958617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.958645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.958823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.958852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.959056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.959081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 
00:25:37.402 [2024-07-16 01:02:11.959288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.959316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.959508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.959536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.959706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.959730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.959923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.959951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.960153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.960189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.960358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.960382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.960564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.960589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.960736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.960762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.960917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.960944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.961137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.961169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 
00:25:37.402 [2024-07-16 01:02:11.961334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.961362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.961533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.961557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.961732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.961760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.961989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.962018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.962185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.962210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.962386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.962411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.962618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.962647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.962815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.962840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.963060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.963109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.963370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.963414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 
00:25:37.402 [2024-07-16 01:02:11.963610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.963639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.963803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.963831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.964054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.964095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.964304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.964340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.964579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.964620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.964894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.964923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.965072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.965099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.965301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.402 [2024-07-16 01:02:11.965332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.402 qpair failed and we were unable to recover it. 00:25:37.402 [2024-07-16 01:02:11.965537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.965577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.965829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.965885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 
00:25:37.403 [2024-07-16 01:02:11.966114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.966154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.966357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.966397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.966603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.966639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.966838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.966886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.967108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.967148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.967375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.967412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.967784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.967848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.968109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.968145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.968403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.968439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.968733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.968796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 
00:25:37.403 [2024-07-16 01:02:11.969014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.969056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.969266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.969303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.969481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.969517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.969773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.969813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.970046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.970083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.970353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.970388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.970615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.970652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.970940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.970977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.971184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.971220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.971404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.971440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 
00:25:37.403 [2024-07-16 01:02:11.971672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.971708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.971971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.972012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.972234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.972266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.972473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.972498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.972673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.972698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.972933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.972974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.973180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.973216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.973445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.973484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.973699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.973730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.973933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.973961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 
00:25:37.403 [2024-07-16 01:02:11.974141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.974176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.974393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.974429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.974626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.974663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.974906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.974948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.975145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.975177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.975375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.975401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.975574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.975604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.975808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.975848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.976061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.403 [2024-07-16 01:02:11.976098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.403 qpair failed and we were unable to recover it. 00:25:37.403 [2024-07-16 01:02:11.976331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.976372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 
00:25:37.404 [2024-07-16 01:02:11.976596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.976637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.976837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.976863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.977093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.977124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.977326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.977356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.977521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.977547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.977745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.977774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.977942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.977977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.978198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.978224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.978462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.978502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.978700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.978740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 
00:25:37.404 [2024-07-16 01:02:11.978943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-16 01:02:11.978980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-16 01:02:11.979033] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization...
00:25:37.404 [2024-07-16 01:02:11.979121] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:37.404 [2024-07-16 01:02:11.979212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-16 01:02:11.979253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-16 01:02:11.979473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-16 01:02:11.979502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-16 01:02:11.979677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-16 01:02:11.979703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-16 01:02:11.979907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-16 01:02:11.979939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-16 01:02:11.980130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-16 01:02:11.980172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-16 01:02:11.980390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-16 01:02:11.980425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-16 01:02:11.980623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-16 01:02:11.980664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-16 01:02:11.980864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-16 01:02:11.980919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-16 01:02:11.981146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.981184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.981362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.981389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.981608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.981634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.981794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.981832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.982023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.982078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.982308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.982349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.982558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.982595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.982803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.982831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.983002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.983032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.983213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.983249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 
00:25:37.404 [2024-07-16 01:02:11.983425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.983462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.983696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.983744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.983942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.983988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.984243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.984284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.984512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.984550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.984759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.984796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.985024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.985063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.985299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.985340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.985574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.985611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-16 01:02:11.985791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-16 01:02:11.985826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 
00:25:37.404 [2024-07-16 01:02:11.986119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.986163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.986345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.986379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.986547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.986592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.986819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.986849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.987039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.987066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.987267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.987295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.987574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.987608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.989037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.989071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.989297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.989330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.989533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.989570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 
00:25:37.405 [2024-07-16 01:02:11.989775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.989811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.990057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.990095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.990344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.990382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.990609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.990636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.990844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.990873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.991093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.991130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.991341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.991377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.991609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.991649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.991890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.991928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.992108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.992150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 
00:25:37.405 [2024-07-16 01:02:11.992357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.992394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.992600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.992636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.992845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.992899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.993087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.993123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.993305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.993341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.993544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.993572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.993753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.993779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.993946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.993977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.994188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.994214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.994374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.994400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 
00:25:37.405 [2024-07-16 01:02:11.994561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.994598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.994825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.994862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.995262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.995298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.995560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.995609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.995838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.995872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.996065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.996093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.996333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.996360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.996585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.996614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.996774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.996804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-16 01:02:11.996992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.997019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 
00:25:37.405 [2024-07-16 01:02:11.997176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-16 01:02:11.997202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:11.997391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:11.997433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:11.997668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:11.997697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:11.997898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:11.997925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:11.998132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:11.998158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:11.998344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:11.998371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:11.998581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:11.998608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:11.998821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:11.998849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:11.999683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:11.999717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:11.999945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:11.999973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 
00:25:37.406 [2024-07-16 01:02:12.000135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.000162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.000346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.000372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.000552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.000578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.000775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.000804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.000992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.001018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.001201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.001227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.001441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.001471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.001687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.001716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.001958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.001984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.002179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.002214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 
00:25:37.406 [2024-07-16 01:02:12.002425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.002454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.002684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.002710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.002902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.002928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.003117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.003143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.003374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.003400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.003577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.003604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.003814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.003843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.004031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.004057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.004228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.004259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.004464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.004491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 
00:25:37.406 [2024-07-16 01:02:12.004663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.004690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.004896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.004940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.005089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-16 01:02:12.005116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-16 01:02:12.005315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.005341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.005544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.005569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.005781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.005807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.005981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.006008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.006162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.006188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.006415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.006443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.006620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.006645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 
00:25:37.407 [2024-07-16 01:02:12.006848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.006874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.007112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.007142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.007315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.007341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.007540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.007568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.007781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.007821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.008027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.008055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.008263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.008293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.008630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.008681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.008880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.008906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.009100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.009129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 
00:25:37.407 [2024-07-16 01:02:12.009392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.009438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.009667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.009694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.009888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.009914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.010068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.010093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.010272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.010305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.010486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.010512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.010680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.010706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.010895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.010922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.011079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.011105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.011311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.011345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 
00:25:37.407 [2024-07-16 01:02:12.011542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.011567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.011754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.011780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.011959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.011985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.012142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.012169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.012318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.012359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.012524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.012549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.012757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.012782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.012983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.013014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.013211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.013237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 00:25:37.407 [2024-07-16 01:02:12.013391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.407 [2024-07-16 01:02:12.013416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.407 qpair failed and we were unable to recover it. 
00:25:37.407 [2024-07-16 01:02:12.013613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-16 01:02:12.013638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-16 01:02:12.013815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-16 01:02:12.013841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-16 01:02:12.014012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-16 01:02:12.014040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-16 01:02:12.014210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-16 01:02:12.014235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-16 01:02:12.014381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-16 01:02:12.014405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-16 01:02:12.014583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-16 01:02:12.014610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-16 01:02:12.014766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-16 01:02:12.014791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 EAL: No free 2048 kB hugepages reported on node 1
00:25:37.408 [2024-07-16 01:02:12.015003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-16 01:02:12.015049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-16 01:02:12.015228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-16 01:02:12.015255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-16 01:02:12.015435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-16 01:02:12.015478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-16 01:02:12.015674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.015700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.015886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.015912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.016092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.016118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.016347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.016390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.016594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.016621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.016823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.016851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.017058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.017086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.017269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.017295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.017477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.017506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.017684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.017712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 
00:25:37.408 [2024-07-16 01:02:12.017907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.017933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.018110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.018135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.018316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.018342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.018546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.018571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.018739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.018764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.018919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.018945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.019100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.019126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.019332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.019357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.019532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.019557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.019746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.019776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 
00:25:37.408 [2024-07-16 01:02:12.019960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.019987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.020171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.020197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.020378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.020403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.020584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.020609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.020754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.020780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.020951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.020978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.021153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.021184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.021354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.021379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.021567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.021593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.021744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.021770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 
00:25:37.408 [2024-07-16 01:02:12.021920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.021947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.022129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.408 [2024-07-16 01:02:12.022156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.408 qpair failed and we were unable to recover it. 00:25:37.408 [2024-07-16 01:02:12.022319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.022346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.022540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.022566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.022753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.022779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.022936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.022962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.023146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.023171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.023372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.023398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.023586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.023611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.023790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.023814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 
00:25:37.409 [2024-07-16 01:02:12.023967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.023993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.024174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.024207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.024427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.024452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.024656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.024682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.024832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.024858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.025045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.025070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.025258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.025283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.025429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.025456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.025637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.025663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.025870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.025900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 
00:25:37.409 [2024-07-16 01:02:12.026052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.026077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.026266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.026291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.026477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.026502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.026680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.026705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.026887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.026913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.027069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.027095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.027300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.027325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.027530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.027556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.027705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.027731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.027904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.027936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 
00:25:37.409 [2024-07-16 01:02:12.028114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.028139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.028296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.028320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.028472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.028497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.028651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.028676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-16 01:02:12.028864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-16 01:02:12.028903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.029062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.029088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.029254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.029279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.029464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.029490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.029653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.029678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.029833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.029859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 
00:25:37.410 [2024-07-16 01:02:12.030045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.030071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.030221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.030248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.030390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.030416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.030573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.030598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.030780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.030813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.031004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.031030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.031178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.031204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.031352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.031378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.031548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.031574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.031748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.031773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 
00:25:37.410 [2024-07-16 01:02:12.031959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.031986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.032127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.032153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.032343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.032368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.032539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.032565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.032718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.032743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.032925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.032952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.033141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.033167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.033323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.033348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.033549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.033574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.033750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.033777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 
00:25:37.410 [2024-07-16 01:02:12.033965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.033991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.034145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.034170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.034354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.034381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.034533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.034559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.034733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.034758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.034909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.034935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.035089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.035114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.035295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.035320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.035502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.035527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.035702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.035731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 
00:25:37.410 [2024-07-16 01:02:12.035908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.035934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.036109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.036135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.036287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.036312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.036499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.036525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.036676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.036708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-16 01:02:12.036880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-16 01:02:12.036907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.037052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.037077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.037252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.037278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.037456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.037481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.037626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.037653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 
00:25:37.411 [2024-07-16 01:02:12.037829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.037855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.038012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.038040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.038194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.038220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.038436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.038462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.038642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.038667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.038845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.038871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.039021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.039047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.039225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.039251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.039432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.039457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.039636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.039661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 
00:25:37.411 [2024-07-16 01:02:12.039847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.039872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.040027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.040053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.040217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.040243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.040391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.040416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.040599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.040625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.040764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.040790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.040989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.041029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.041192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.041230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-16 01:02:12.041428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-16 01:02:12.041454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.041634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.041660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 
00:25:37.412 [2024-07-16 01:02:12.041838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.041865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.042023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.042051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.042232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.042258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.042454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.042480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.042692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.042717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.042895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.042921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.043103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.043128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.043332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.043357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.043574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.043600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.043806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.043837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 
00:25:37.412 [2024-07-16 01:02:12.044036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.044062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.044218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.044244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.044429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.044455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.044636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.044663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.044836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.044862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.045021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.045047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.045202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.045228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.045406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.045433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.045611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.045637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.045820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.045845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 
00:25:37.412 [2024-07-16 01:02:12.046015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.046042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.046227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.046253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.046416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.046441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.046625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.046650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.046828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.046854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.047044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.047070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.047225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.047251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.047402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.047428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.047588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.047615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.047792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.047818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 
00:25:37.412 [2024-07-16 01:02:12.048013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.048039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.048190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.048215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.048361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.048386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.048569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.048600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.048792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.048819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.049027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.049055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.049228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.049278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.049482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.049509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.049686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.049712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-16 01:02:12.049893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-16 01:02:12.049925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 
00:25:37.413 [2024-07-16 01:02:12.050076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.050102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.050118] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:37.413 [2024-07-16 01:02:12.050299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.050324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.050482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.050513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.050685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.050711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.050907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.050933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.051112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.051137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.051337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.051363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.051567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.051592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.051749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.051774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 
00:25:37.413 [2024-07-16 01:02:12.051999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.052039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.052207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.052234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.052427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.052452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.052627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.052652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.052692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9813f0 (9): Bad file descriptor 00:25:37.413 [2024-07-16 01:02:12.052921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.052960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.053128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.053155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.053316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.053341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.053496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.053524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.053684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.053710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 
00:25:37.413 [2024-07-16 01:02:12.053917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.053943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.054122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.054147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.054298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.054323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.054475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.054501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.054725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.054751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.054914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.054941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.055124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.055150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.055313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.055340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.055490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.055516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.055690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.055715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 
00:25:37.413 [2024-07-16 01:02:12.055924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.055950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.056106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.056132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.056284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.056309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.056461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.056487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.056688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.056713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-16 01:02:12.056892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-16 01:02:12.056918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.057073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.057099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.057288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.057318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.057609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.057634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.057794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.057820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 
00:25:37.414 [2024-07-16 01:02:12.058036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.058061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.058210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.058236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.058420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.058445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.058634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.058659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.058812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.058837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.059021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.059047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.059272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.059298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.059502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.059528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.059686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.059713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.059895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.059921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 
00:25:37.414 [2024-07-16 01:02:12.060074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.060099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.060298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.060324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.060513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.060539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.060717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.060742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.060943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.060970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.061152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.061177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.061336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.061361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.061579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.061604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.061790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.061815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.061967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.061993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 
00:25:37.414 [2024-07-16 01:02:12.062173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.062198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.062385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.062411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.062615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.062641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.062820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.062845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.063092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.063134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.063339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.063368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.063567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.063595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.063809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.063836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.064010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.064037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-16 01:02:12.064197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-16 01:02:12.064223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 
00:25:37.414 [2024-07-16 01:02:12.064397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.064423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.064601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.064626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.064807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.064833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.065038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.065065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.065250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.065276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.065432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.065460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.065649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.065678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.065860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.065901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.066111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.066136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.066340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.066366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 
00:25:37.415 [2024-07-16 01:02:12.066539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.066568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.066775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.066801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.066984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.067011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.067193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.067219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.067403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.067430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.067612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.067638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.067849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.067888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.068047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.068073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.068252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.068285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.068469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.068496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 
00:25:37.415 [2024-07-16 01:02:12.068686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.068712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.068883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.068911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.069067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.069094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.069307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.069333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.069527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.069553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.069735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.069761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.069939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.069967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.070126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.070152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.070341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.070366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.070520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.070546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 
00:25:37.415 [2024-07-16 01:02:12.070699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.070724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.070924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.070951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.071098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.071123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.071306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.071339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.071497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.071524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.071704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.071730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.071899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.071926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.072106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.072131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.072314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.072342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 00:25:37.415 [2024-07-16 01:02:12.072542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.415 [2024-07-16 01:02:12.072569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.415 qpair failed and we were unable to recover it. 
00:25:37.416 [2024-07-16 01:02:12.072719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.072744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.072889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.072915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.073099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.073125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.073301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.073326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.073503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.073528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.073739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.073765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.073952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.073979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.074130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.074160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.074316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.074341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.074529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.074555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 
00:25:37.416 [2024-07-16 01:02:12.074710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.074735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.074899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.074935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.075093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.075120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.075303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.075330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.075485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.075522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.075673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.075699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.075881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.075907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.076084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.076110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.076299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.076337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.076542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.076568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 
00:25:37.416 [2024-07-16 01:02:12.076774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.076800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.077008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.077036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.077355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.077380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.077565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.077590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.077740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.077765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.077918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.077944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.078123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.078149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.078340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.078365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.078567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.078602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.078754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.078779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 
00:25:37.416 [2024-07-16 01:02:12.078935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.078963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.079117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.079144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.079303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.416 [2024-07-16 01:02:12.079329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.416 qpair failed and we were unable to recover it. 00:25:37.416 [2024-07-16 01:02:12.079489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-16 01:02:12.079517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-16 01:02:12.079698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-16 01:02:12.079724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-16 01:02:12.079891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-16 01:02:12.079917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-16 01:02:12.080074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-16 01:02:12.080101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-16 01:02:12.080274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-16 01:02:12.080300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-16 01:02:12.080488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-16 01:02:12.080514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-16 01:02:12.080688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-16 01:02:12.080714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 
00:25:37.417 [2024-07-16 01:02:12.080900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-16 01:02:12.080935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-16 01:02:12.081109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-16 01:02:12.081135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-16 01:02:12.081287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-16 01:02:12.081312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-16 01:02:12.081490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-16 01:02:12.081516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-16 01:02:12.081675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-16 01:02:12.081700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-16 01:02:12.081897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-16 01:02:12.081923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-16 01:02:12.082129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-16 01:02:12.082154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-16 01:02:12.082341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-16 01:02:12.082372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-16 01:02:12.082550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-16 01:02:12.082576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-16 01:02:12.082756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-16 01:02:12.082782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 
00:25:37.417 [2024-07-16 01:02:12.082995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.417 [2024-07-16 01:02:12.083022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:37.417 qpair failed and we were unable to recover it.
00:25:37.708 [2024-07-16 01:02:12.125045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.708 [2024-07-16 01:02:12.125071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:37.708 qpair failed and we were unable to recover it.
00:25:37.708 [2024-07-16 01:02:12.125243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.125269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.125445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.125471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.125644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.125670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.125842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.125867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.126033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.126060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.126244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.126271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.126421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.126446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.126648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.126674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.126828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.126853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.127035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.127078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 
00:25:37.708 [2024-07-16 01:02:12.127274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.127302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.127455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.127483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.127662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.127688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.127888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.127915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.128066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.128098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.128330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.128357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.128514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.128539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.128715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.128742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.128909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.128939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.129122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.129149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 
00:25:37.708 [2024-07-16 01:02:12.129353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.129378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.129562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.129588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.129795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.129821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.129979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.130006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.708 [2024-07-16 01:02:12.130187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.708 [2024-07-16 01:02:12.130213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.708 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.130427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.130452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.130627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.130653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.130842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.130870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.131085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.131111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.131295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.131320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 
00:25:37.709 [2024-07-16 01:02:12.131468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.131495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.131651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.131676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.131857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.131895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.132074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.132100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.132289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.132314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.132487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.132513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.132688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.132713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.132865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.132897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.133078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.133104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.133295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.133321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 
00:25:37.709 [2024-07-16 01:02:12.133475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.133500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.133661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.133688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.133888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.133929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.134117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.134144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.134329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.134356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.134563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.134590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.134737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.134763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.134947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.134975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.135158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.135195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.135340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.135366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 
00:25:37.709 [2024-07-16 01:02:12.135545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.135572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.135755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.135784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.135973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.136000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.136175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.136200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.136374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.136405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.136559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.709 [2024-07-16 01:02:12.136585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.709 qpair failed and we were unable to recover it. 00:25:37.709 [2024-07-16 01:02:12.136765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.136792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.136958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.136987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.137145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.137172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.137320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.137346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 
00:25:37.710 [2024-07-16 01:02:12.137548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.137573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.137719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.137744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.137897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.137923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.138101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.138127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.138308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.138335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.138483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.138509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.138654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.138680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.138861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.138891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.139049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.139074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.139254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.139281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 
00:25:37.710 [2024-07-16 01:02:12.139432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.139459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.139662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.139688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.139842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.139870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.140054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.140081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.140264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.140290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.140465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.140491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.140668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.140696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.140850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.140881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.141037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.141064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.141250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.141276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 
00:25:37.710 [2024-07-16 01:02:12.141454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.141480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.141643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.141670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.141843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.141881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.142058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.142084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.142236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.142262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.142411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.142436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.142591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.142617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.142824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.142851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.143038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.143064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.143272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.143298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 
00:25:37.710 [2024-07-16 01:02:12.143501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.143527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.143701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.143727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.143904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.143931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.144112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.144139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.144293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.710 [2024-07-16 01:02:12.144323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.710 qpair failed and we were unable to recover it. 00:25:37.710 [2024-07-16 01:02:12.144531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.144558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.144715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.144743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.144929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.144955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.145158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.145184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.145387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.145412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 
00:25:37.711 [2024-07-16 01:02:12.145616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.145642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.145819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.145846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.146000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.146027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.146202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.146228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.146407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.146433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.146579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.146604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.146746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.146772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.146974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.147000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.147189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.147215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.147364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.147390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 
00:25:37.711 [2024-07-16 01:02:12.147546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.147571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.147767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.147807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.147962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.147991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.148169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.148196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.148370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.148397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.148599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.148626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.148774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.148802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.149017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.149044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.149228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.149255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.149465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.149491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 
00:25:37.711 [2024-07-16 01:02:12.149695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.149721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.149907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.149935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.150082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.150108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.150259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.150286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.150463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.150489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.150696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.150723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.150882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.711 [2024-07-16 01:02:12.150909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.711 qpair failed and we were unable to recover it. 00:25:37.711 [2024-07-16 01:02:12.151062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.151088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.151272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.151297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.151477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.151503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 
00:25:37.712 [2024-07-16 01:02:12.151677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.151702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.151908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.151936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.152118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.152145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.152322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.152348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.152498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.152529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.152735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.152762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.152967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.152994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.153170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.153197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.153406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.153434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.153579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.153606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 
00:25:37.712 [2024-07-16 01:02:12.153751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.153777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.153953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.153979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.154133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.154159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.154335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.154361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.154538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.154564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.154765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.154790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.154952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.154978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.155184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.155210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.155421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.155447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.155626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.155652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 
00:25:37.712 [2024-07-16 01:02:12.155828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.155854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.156063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.156089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.156250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.156276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.156452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.156477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.156629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.156654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.156836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.156862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.157041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.157066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.157271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.157297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.712 [2024-07-16 01:02:12.157473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.712 [2024-07-16 01:02:12.157499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.712 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.157705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.157731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 
00:25:37.713 [2024-07-16 01:02:12.157911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.157939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.158120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.158146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.158303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.158329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.158498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.158523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.158699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.158725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.158870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.158907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.159114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.159140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.159300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.159325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.159500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.159525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.159728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.159753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 
00:25:37.713 [2024-07-16 01:02:12.159912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.159938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.160149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.160175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.160322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.160347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.160498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.160524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.160678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.160707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.160886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.160912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.161095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.161121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.161301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.161327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.161533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.161558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.161711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.161737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 
00:25:37.713 [2024-07-16 01:02:12.161919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.161945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.162150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.162175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.162323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.162349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.162554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.162579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.162756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.162783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.162987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.163013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.163161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.163187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.163340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.163366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.163545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.163571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.713 [2024-07-16 01:02:12.163744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.163769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 
00:25:37.713 [2024-07-16 01:02:12.163944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.713 [2024-07-16 01:02:12.163970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.713 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.164142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.164168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.164357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.164383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.164587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.164613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.164785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.164810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.164995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.165021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.165173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.165198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.165371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.165398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.165572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.165597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.165744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.165769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 
00:25:37.714 [2024-07-16 01:02:12.165923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.165949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.166113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.166153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.166310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.166338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.166516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.166542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.166691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.166717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.166891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.166932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.167103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.167142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.167296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.167323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.167377] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.714 [2024-07-16 01:02:12.167415] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:37.714 [2024-07-16 01:02:12.167431] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.714 [2024-07-16 01:02:12.167444] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.714 [2024-07-16 01:02:12.167454] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
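The app_setup_trace NOTICE entries above name two ways to inspect this run's tracepoints. A minimal sketch using exactly the values the log prints (tracepoint group nvmf, shm instance 0, trace file /dev/shm/nvmf_trace.0); the snapshot filename is illustrative only:
  # Snapshot nvmf tracepoint events from the running target, as the NOTICE suggests
  spdk_trace -s nvmf -i 0 > nvmf_trace.snapshot.txt
  # 'spdk_trace' with no arguments also works when this is the only SPDK app running
  # Or keep the raw shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0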
00:25:37.714 [2024-07-16 01:02:12.167473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.167498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.167515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:37.714 [2024-07-16 01:02:12.167591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:37.714 [2024-07-16 01:02:12.167679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.167725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.167637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:37.714 [2024-07-16 01:02:12.167641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:37.714 [2024-07-16 01:02:12.167911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.167937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.168118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.168150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.168421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.168446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.168636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.168661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.168844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.168869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.169046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.169072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.169236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.169261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 
00:25:37.714 [2024-07-16 01:02:12.169419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.169444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.169599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.169626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.169782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.169807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.170058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.170084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.170231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.170256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.170393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.170418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.170593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.170618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.170780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.170805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.170987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.171013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 00:25:37.714 [2024-07-16 01:02:12.171155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.171180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.714 qpair failed and we were unable to recover it. 
00:25:37.714 [2024-07-16 01:02:12.171355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.714 [2024-07-16 01:02:12.171380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.171517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.171542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.171710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.171735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.171921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.171946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.172205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.172229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.172407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.172431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.172585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.172610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.172760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.172784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.172961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.172987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.173211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.173236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 
00:25:37.715 [2024-07-16 01:02:12.173390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.173414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.173552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.173577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.173835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.173860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.174025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.174050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.174231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.174256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.174406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.174430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.174599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.174624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.174768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.174793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.174972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.174998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.175147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.175172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 
00:25:37.715 [2024-07-16 01:02:12.175365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.175390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.175558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.175582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.175786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.175811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.175987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.176012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.176166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.176192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.176376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.176401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.176547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.176572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.176712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.176737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.176895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.176921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.177067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.177092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 
00:25:37.715 [2024-07-16 01:02:12.177242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.177266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.177409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.715 [2024-07-16 01:02:12.177434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.715 qpair failed and we were unable to recover it. 00:25:37.715 [2024-07-16 01:02:12.177627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.177651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.177804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.177830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.178007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.178048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.178202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.178229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.178375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.178401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.178590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.178615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.178786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.178812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.178959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.178984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 
00:25:37.716 [2024-07-16 01:02:12.179142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.179167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.179314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.179339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.179496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.179523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.179669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.179696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.179860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.179891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.180033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.180058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.180246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.180271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.180418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.180443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.180630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.180655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.180833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.180858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 
00:25:37.716 [2024-07-16 01:02:12.181009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.181034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.181189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.181214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.181363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.181392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.181569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.181594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.181744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.181769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.182037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.182062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.182218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.182243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.182421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.182446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.182604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.182628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.182772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.182796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 
00:25:37.716 [2024-07-16 01:02:12.182959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.182985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.183166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.183191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.183345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.183370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.183527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.183551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.183723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.183747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.183898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.183923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.184097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.184122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.184270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.184295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.184466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.184491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.184643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.184669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 
00:25:37.716 [2024-07-16 01:02:12.184810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.716 [2024-07-16 01:02:12.184834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.716 qpair failed and we were unable to recover it. 00:25:37.716 [2024-07-16 01:02:12.184998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.717 [2024-07-16 01:02:12.185023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.717 qpair failed and we were unable to recover it. 00:25:37.717 [2024-07-16 01:02:12.185168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.717 [2024-07-16 01:02:12.185193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.717 qpair failed and we were unable to recover it. 00:25:37.717 [2024-07-16 01:02:12.185370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.717 [2024-07-16 01:02:12.185395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.717 qpair failed and we were unable to recover it. 00:25:37.717 [2024-07-16 01:02:12.185563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.717 [2024-07-16 01:02:12.185588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.717 qpair failed and we were unable to recover it. 00:25:37.717 [2024-07-16 01:02:12.185751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.717 [2024-07-16 01:02:12.185776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.717 qpair failed and we were unable to recover it. 00:25:37.717 [2024-07-16 01:02:12.185956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.717 [2024-07-16 01:02:12.185982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.717 qpair failed and we were unable to recover it. 00:25:37.717 [2024-07-16 01:02:12.186137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.717 [2024-07-16 01:02:12.186162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.717 qpair failed and we were unable to recover it. 00:25:37.717 [2024-07-16 01:02:12.186354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.717 [2024-07-16 01:02:12.186378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.717 qpair failed and we were unable to recover it. 00:25:37.717 [2024-07-16 01:02:12.186544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.717 [2024-07-16 01:02:12.186573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.717 qpair failed and we were unable to recover it. 
00:25:37.717 [2024-07-16 01:02:12.186710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.717 [2024-07-16 01:02:12.186735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.717 qpair failed and we were unable to recover it. 00:25:37.717 [2024-07-16 01:02:12.186911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.717 [2024-07-16 01:02:12.186937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.717 qpair failed and we were unable to recover it. 00:25:37.717 [2024-07-16 01:02:12.187094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.717 [2024-07-16 01:02:12.187119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.717 qpair failed and we were unable to recover it. 00:25:37.717 [2024-07-16 01:02:12.187280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.717 [2024-07-16 01:02:12.187305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.717 qpair failed and we were unable to recover it. 00:25:37.717 [2024-07-16 01:02:12.187448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.717 [2024-07-16 01:02:12.187473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.717 qpair failed and we were unable to recover it. 00:25:37.717 [2024-07-16 01:02:12.187626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.717 [2024-07-16 01:02:12.187650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.717 qpair failed and we were unable to recover it. 00:25:37.717 [2024-07-16 01:02:12.187809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.717 [2024-07-16 01:02:12.187834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.717 qpair failed and we were unable to recover it. 00:25:37.717 [2024-07-16 01:02:12.188005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.717 [2024-07-16 01:02:12.188031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.717 qpair failed and we were unable to recover it. 00:25:37.717 [2024-07-16 01:02:12.188219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.717 [2024-07-16 01:02:12.188245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.717 qpair failed and we were unable to recover it. 00:25:37.717 [2024-07-16 01:02:12.188397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.717 [2024-07-16 01:02:12.188422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.717 qpair failed and we were unable to recover it. 
00:25:37.717 [2024-07-16 01:02:12.188627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.188652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.188828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.188853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.189007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.189032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.189215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.189242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.189393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.189418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.189599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.189624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.189796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.189820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.189982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.190007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.190193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.190218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.190394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.190418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 
00:25:37.718 [2024-07-16 01:02:12.190599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.190623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.190797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.190822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.191001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.191026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.191212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.191237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.191382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.191407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.191581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.191606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.191751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.191777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.191964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.191990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.192134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.192158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.192311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.192336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 
00:25:37.718 [2024-07-16 01:02:12.192506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.192531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.192698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.718 [2024-07-16 01:02:12.192723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.718 qpair failed and we were unable to recover it. 00:25:37.718 [2024-07-16 01:02:12.192906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.192931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.193075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.193099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.193240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.193265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.193452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.193476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.193655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.193679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.193833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.193859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.194069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.194111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.194294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.194321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 
00:25:37.719 [2024-07-16 01:02:12.194467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.194499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.194643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.194668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.194817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.194842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.195049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.195075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.195234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.195259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.195440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.195465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.195651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.195677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.195852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.195884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.196079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.196104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.196250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.196275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 
00:25:37.719 [2024-07-16 01:02:12.196465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.196490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.196638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.196663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.196846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.196871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.197082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.197107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.197328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.197353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.197496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.197521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.197675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.197700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.197874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.197903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.198068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.198093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.198283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.198307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 
00:25:37.719 [2024-07-16 01:02:12.198461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.198485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.198634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.198659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.198835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.198860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.199145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.199170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.199313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.199338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.199545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.199569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.199708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.199733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.199905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.199934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.200117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.200142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.200283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.200308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 
00:25:37.719 [2024-07-16 01:02:12.200506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.200531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.200712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.200736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.200908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.719 [2024-07-16 01:02:12.200933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.719 qpair failed and we were unable to recover it. 00:25:37.719 [2024-07-16 01:02:12.201166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.201191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.201334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.201358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.201502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.201527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.201681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.201705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.201853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.201882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.202023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.202047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.202197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.202222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 
00:25:37.720 [2024-07-16 01:02:12.202371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.202395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.202608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.202633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.202788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.202813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.202960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.202986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.203143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.203168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.203374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.203398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.203573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.203598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.203771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.203796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.203949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.203974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.204153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.204178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 
00:25:37.720 [2024-07-16 01:02:12.204330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.204355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.204497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.204522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.204692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.204717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.204898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.204924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.205071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.205100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.205290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.205315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.205493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.205518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.205663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.205688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.205869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.205914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.206090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.206115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 
00:25:37.720 [2024-07-16 01:02:12.206285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.206310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.206456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.206481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.206675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.206700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.206858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.206888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.207062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.207087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.207243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.207268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.207420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.207445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.207590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.207614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.207801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.207826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.207989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.208015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 
00:25:37.720 [2024-07-16 01:02:12.208168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.208193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.208334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.208359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.208508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.208532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.720 [2024-07-16 01:02:12.208675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.720 [2024-07-16 01:02:12.208700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.720 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.208906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.208941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.209102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.209127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.209331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.209356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.209511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.209536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.209681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.209706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.209864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.209895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 
00:25:37.721 [2024-07-16 01:02:12.210069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.210094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.210248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.210276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.210446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.210472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.210624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.210649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.210832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.210857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.211042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.211068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.211220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.211245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.211391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.211416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.211591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.211615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.211793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.211817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 
00:25:37.721 [2024-07-16 01:02:12.211970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.211996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.212147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.212172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.212324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.212348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.212522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.212546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.212719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.212743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.212921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.212951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.213104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.213129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.213270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.213296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.213440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.213466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.213644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.213669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 
00:25:37.721 [2024-07-16 01:02:12.213901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.213927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.214098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.214122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.214303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.214329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.214502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.214527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.214701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.214725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.214897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.214922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.215086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.215110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.215289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.215314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.215465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.215489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.215675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.215700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 
00:25:37.721 [2024-07-16 01:02:12.215909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.215935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.216095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.216119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.216291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.216316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.216464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.216488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.216662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.721 [2024-07-16 01:02:12.216687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.721 qpair failed and we were unable to recover it. 00:25:37.721 [2024-07-16 01:02:12.216842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.722 [2024-07-16 01:02:12.216866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.722 qpair failed and we were unable to recover it. 00:25:37.722 [2024-07-16 01:02:12.217017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.722 [2024-07-16 01:02:12.217042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.722 qpair failed and we were unable to recover it. 00:25:37.722 [2024-07-16 01:02:12.217191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.722 [2024-07-16 01:02:12.217217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.722 qpair failed and we were unable to recover it. 00:25:37.722 [2024-07-16 01:02:12.217493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.722 [2024-07-16 01:02:12.217517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.722 qpair failed and we were unable to recover it. 00:25:37.722 [2024-07-16 01:02:12.217706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.722 [2024-07-16 01:02:12.217730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 00:25:37.722 qpair failed and we were unable to recover it. 
00:25:37.722 [2024-07-16 01:02:12.217900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.722 [2024-07-16 01:02:12.217925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420
00:25:37.722 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x9733f0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats through 01:02:12.255937 ...]
00:25:37.727 [2024-07-16 01:02:12.256119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.727 [2024-07-16 01:02:12.256162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:37.727 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 through 01:02:12.259353 ...]
00:25:37.727 [2024-07-16 01:02:12.259328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.727 [2024-07-16 01:02:12.259353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:37.727 qpair failed and we were unable to recover it.
00:25:37.727 [2024-07-16 01:02:12.259500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.259527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.727 [2024-07-16 01:02:12.259675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.259701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.727 [2024-07-16 01:02:12.259882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.259907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.727 [2024-07-16 01:02:12.260064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.260089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.727 [2024-07-16 01:02:12.260264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.260289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.727 [2024-07-16 01:02:12.260495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.260520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.727 [2024-07-16 01:02:12.260666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.260692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.727 [2024-07-16 01:02:12.260871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.260902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.727 [2024-07-16 01:02:12.261079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.261106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.727 [2024-07-16 01:02:12.261264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.261290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 
00:25:37.727 [2024-07-16 01:02:12.261475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.261501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.727 [2024-07-16 01:02:12.261648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.261673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.727 [2024-07-16 01:02:12.261812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.261838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.727 [2024-07-16 01:02:12.262036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.262062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.727 [2024-07-16 01:02:12.262239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.262264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.727 [2024-07-16 01:02:12.262407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.262432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.727 [2024-07-16 01:02:12.262603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.262628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.727 [2024-07-16 01:02:12.262827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.262852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.727 [2024-07-16 01:02:12.263021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.263046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.727 [2024-07-16 01:02:12.263187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.263212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 
00:25:37.727 [2024-07-16 01:02:12.263387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.263412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.727 [2024-07-16 01:02:12.263557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.727 [2024-07-16 01:02:12.263582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.727 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.263755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.263780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.263936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.263963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.264117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.264141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.264283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.264308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.264463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.264487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.264656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.264681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.264838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.264863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.265049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.265074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 
00:25:37.728 [2024-07-16 01:02:12.265223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.265248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.265421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.265446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.265603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.265628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.265773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.265797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.265946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.265972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.266134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.266159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.266325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.266353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.266531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.266556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.266737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.266762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.266917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.266942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 
00:25:37.728 [2024-07-16 01:02:12.267123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.267150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.267326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.267352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.267535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.267561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.267698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.267723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.267887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.267913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.268070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.268095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.268247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.268272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.268435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.268460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.268668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.268693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.268866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.268898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 
00:25:37.728 [2024-07-16 01:02:12.269061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.269086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.269244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.269269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.269449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.269474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.269620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.269645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.269793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.269817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.270000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.270026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.270184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.270209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.270386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.270411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.270559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.270584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.270753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.270778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 
00:25:37.728 [2024-07-16 01:02:12.270944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.270970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.271118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.271144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.728 [2024-07-16 01:02:12.271296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.728 [2024-07-16 01:02:12.271321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.728 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.271507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.271532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.271675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.271701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.271853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.271885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.272037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.272063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.272229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.272254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.272400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.272426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.272567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.272593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 
00:25:37.729 [2024-07-16 01:02:12.272768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.272794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.272941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.272967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.273115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.273141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.273315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.273340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.273504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.273529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.273673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.273699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.273869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.273904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.274099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.274125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.274290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.274316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.274473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.274498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 
00:25:37.729 [2024-07-16 01:02:12.274752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.274778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.274955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.274981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.275141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.275166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.275312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.275338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.275498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.275524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.275680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.275705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.275873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.275903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.276052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.276079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.276244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.276270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.276442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.276467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 
00:25:37.729 [2024-07-16 01:02:12.276649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.276674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.276825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.276851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.277020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.277046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.277202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.277227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.277368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.277393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.277535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.277559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.277710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.277735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.277938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.277963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.278145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.278170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.278314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.278339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 
00:25:37.729 [2024-07-16 01:02:12.278487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.278512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.278688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.278712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.729 [2024-07-16 01:02:12.278860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.729 [2024-07-16 01:02:12.278891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.729 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.279074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.279100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.279265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.279289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.279430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.279455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.279631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.279656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.279837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.279862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.280009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.280035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.280191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.280216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 
00:25:37.730 [2024-07-16 01:02:12.280384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.280410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.280559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.280584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.280774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.280799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.280952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.280979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.281125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.281150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.281335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.281360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.281550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.281579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.281728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.281753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.281928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.281954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.282134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.282159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 
00:25:37.730 [2024-07-16 01:02:12.282340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.282365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.282530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.282555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.282718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.282744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.282897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.282924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.283106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.283131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.283280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.283305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.283479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.283504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.283665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.283689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.283836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.283861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.284021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.284047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 
00:25:37.730 [2024-07-16 01:02:12.284224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.284250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.284439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.284464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.284620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.284645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.284831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.284856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.285046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.285089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.285268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.285296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.285474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.285500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.285679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.285705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.285867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.285903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.286084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.286110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 
00:25:37.730 [2024-07-16 01:02:12.286265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.286292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.286439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.286465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.730 [2024-07-16 01:02:12.286644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.730 [2024-07-16 01:02:12.286670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.730 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.286845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.286873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.287040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.287065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.287241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.287266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.287428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.287454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.287618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.287644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.287803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.287828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.287997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.288023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 
00:25:37.731 [2024-07-16 01:02:12.288184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.288209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.288351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.288376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.288530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.288555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.288701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.288726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.288887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.288912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.289086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.289111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.289262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.289292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.289446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.289471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.289615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.289640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.289795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.289820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 
00:25:37.731 [2024-07-16 01:02:12.289989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.290015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.290165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.290192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.290362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.290387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.290530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.290556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.290730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.290756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.290934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.290960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.291104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.291131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.291281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.291306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.291446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.291472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.291621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.291647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 
00:25:37.731 [2024-07-16 01:02:12.291793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.291818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.292010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.292036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.292182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.292209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.292351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.292377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.292545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.292571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.292717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.292742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.292909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.731 [2024-07-16 01:02:12.292950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.731 qpair failed and we were unable to recover it. 00:25:37.731 [2024-07-16 01:02:12.293116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.293143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.293321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.293348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.293492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.293518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 
00:25:37.732 [2024-07-16 01:02:12.293690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.293715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.293886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.293912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.294089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.294115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.294289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.294320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.294476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.294503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.294677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.294703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.294918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.294945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.295119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.295145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.295326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.295354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.295521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.295547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 
00:25:37.732 [2024-07-16 01:02:12.295699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.295726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.295891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.295917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.296072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.296098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.296291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.296316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.296489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.296514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.296656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.296681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.296828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.296853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.297044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.297083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.297234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.297261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.297428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.297454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 
00:25:37.732 [2024-07-16 01:02:12.297636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.297661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.297825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.297851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.298014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.298040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.298191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.298216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.298390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.298416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.298594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.298620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.298815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.298843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.298994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.299020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.299167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.299192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.299338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.299363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 
00:25:37.732 [2024-07-16 01:02:12.299524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.299549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.299710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.299735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.299880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.299907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.300056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.300081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.300270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.300295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.300444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.300469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.300644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.732 [2024-07-16 01:02:12.300669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.732 qpair failed and we were unable to recover it. 00:25:37.732 [2024-07-16 01:02:12.300828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.300853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.301023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.301063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.301217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.301244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 
00:25:37.733 [2024-07-16 01:02:12.301403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.301429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.301573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.301599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.301753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.301780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.301953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.301985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.302158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.302183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.302327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.302352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.302512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.302537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.302719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.302744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.302915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.302942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.303118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.303143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 
00:25:37.733 [2024-07-16 01:02:12.303327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.303352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.303494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.303519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.303669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.303696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.303848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.303881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.304050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.304075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.304222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.304247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.304422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.304448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.304599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.304624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.304808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.304834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 00:25:37.733 [2024-07-16 01:02:12.305000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.733 [2024-07-16 01:02:12.305053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.733 qpair failed and we were unable to recover it. 
00:25:37.733 [2024-07-16 01:02:12.305242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.733 [2024-07-16 01:02:12.305269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:37.733 qpair failed and we were unable to recover it.
00:25:37.733 [2024-07-16 01:02:12.305431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.733 [2024-07-16 01:02:12.305456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:37.733 qpair failed and we were unable to recover it.
00:25:37.733 [2024-07-16 01:02:12.305604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.733 [2024-07-16 01:02:12.305630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:37.733 qpair failed and we were unable to recover it.
00:25:37.733 [2024-07-16 01:02:12.305811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.733 [2024-07-16 01:02:12.305837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:37.733 qpair failed and we were unable to recover it.
00:25:37.733 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:37.733 [2024-07-16 01:02:12.306028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.733 [2024-07-16 01:02:12.306067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:37.733 qpair failed and we were unable to recover it.
00:25:37.733 [2024-07-16 01:02:12.306226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.733 [2024-07-16 01:02:12.306252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:37.733 qpair failed and we were unable to recover it.
00:25:37.733 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:25:37.733 [2024-07-16 01:02:12.306401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.733 [2024-07-16 01:02:12.306426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:37.733 qpair failed and we were unable to recover it.
00:25:37.733 [2024-07-16 01:02:12.306577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.733 [2024-07-16 01:02:12.306603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420
00:25:37.733 qpair failed and we were unable to recover it.
00:25:37.733 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:37.733 [2024-07-16 01:02:12.306782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.733 [2024-07-16 01:02:12.306810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.733 qpair failed and we were unable to recover it.
00:25:37.733 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:37.733 [2024-07-16 01:02:12.306982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.733 [2024-07-16 01:02:12.307012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.733 qpair failed and we were unable to recover it.
00:25:37.733 [2024-07-16 01:02:12.307191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.733 [2024-07-16 01:02:12.307218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.733 qpair failed and we were unable to recover it.
00:25:37.733 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:37.733 [2024-07-16 01:02:12.307363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.733 [2024-07-16 01:02:12.307389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.733 qpair failed and we were unable to recover it.
00:25:37.733 [2024-07-16 01:02:12.307535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.733 [2024-07-16 01:02:12.307561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.733 qpair failed and we were unable to recover it.
00:25:37.733 [2024-07-16 01:02:12.307709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.733 [2024-07-16 01:02:12.307735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.733 qpair failed and we were unable to recover it.
00:25:37.733 [2024-07-16 01:02:12.307892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.733 [2024-07-16 01:02:12.307920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.733 qpair failed and we were unable to recover it.
00:25:37.733 [2024-07-16 01:02:12.308076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.733 [2024-07-16 01:02:12.308102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.733 qpair failed and we were unable to recover it.
00:25:37.733 [2024-07-16 01:02:12.308296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.734 [2024-07-16 01:02:12.308321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.734 qpair failed and we were unable to recover it.
00:25:37.734 [2024-07-16 01:02:12.308460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.734 [2024-07-16 01:02:12.308485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:25:37.734 qpair failed and we were unable to recover it.
00:25:37.734 [2024-07-16 01:02:12.308637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.308663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.308814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.308841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.309042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.309069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.309247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.309278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.309421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.309447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.309648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.309673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.309825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.309852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.310027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.310053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.310228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.310254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.310428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.310453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 
00:25:37.734 [2024-07-16 01:02:12.310599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.310624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.310779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.310804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.310961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.310988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.311161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.311188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.311362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.311389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.311570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.311596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.311775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.311801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.311989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.312015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.312177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.312202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.312367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.312393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 
00:25:37.734 [2024-07-16 01:02:12.312553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.312582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.312744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.312770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.312928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.312954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.313108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.313134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.313288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.313314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.313455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.313480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.313656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.313682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.313836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.313861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.314013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.314040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.314187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.314213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 
00:25:37.734 [2024-07-16 01:02:12.314392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.314418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.314569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.314594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.314745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.314770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.314934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.314960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.315120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.315146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.315327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.315352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.315491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.315516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.315665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.734 [2024-07-16 01:02:12.315691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.734 qpair failed and we were unable to recover it. 00:25:37.734 [2024-07-16 01:02:12.315856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.315887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.316040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.316066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 
00:25:37.735 [2024-07-16 01:02:12.316214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.316239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.316380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.316405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.316579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.316605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.316779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.316809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.316961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.316987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.317147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.317174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.317318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.317343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.317536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.317561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.317703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.317728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.317895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.317924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 
00:25:37.735 [2024-07-16 01:02:12.318101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.318127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.318307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.318333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.318487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.318512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.318653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.318678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.318827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.318853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.319024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.319051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.319199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.319225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.319383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.319410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.319585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.319611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.319784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.319811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 
00:25:37.735 [2024-07-16 01:02:12.319962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.319990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.320138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.320163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.320366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.320392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.320536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.320561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.320701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.320726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.320907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.320948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.321113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.321142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.321291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.321318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.321463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.321490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.321640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.321667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 
00:25:37.735 [2024-07-16 01:02:12.321829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.321856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.322014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.322041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.322191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.322217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.735 [2024-07-16 01:02:12.322360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.735 [2024-07-16 01:02:12.322385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.735 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.322531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.322557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.322713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.322738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.322883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.322909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.323080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.323106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.323278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.323303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.323456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.323482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 
00:25:37.736 [2024-07-16 01:02:12.323651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.323677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.323820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.323846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.324002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.324028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.324202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.324231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.324375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.324402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.324587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.324613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.324818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.324843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.325001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.325026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.325187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.325212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.325384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.325409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 
00:25:37.736 [2024-07-16 01:02:12.325578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.325603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.325747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.325773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.325931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.325957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.326104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.326131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.326304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.326330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.326485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.326511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.326655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.326681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.326871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.326904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.327053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.327078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.327240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.327265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 
00:25:37.736 [2024-07-16 01:02:12.327407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.327433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.327627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.327652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.327804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.327830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.327978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.328004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.328151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.328176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.328344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.328370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.328533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.328558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.328713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.328738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.328886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.328911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.329058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.329083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 
00:25:37.736 [2024-07-16 01:02:12.329278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.329303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.329452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.329478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.329633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.329659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.736 [2024-07-16 01:02:12.329821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.736 [2024-07-16 01:02:12.329846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.736 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.329997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.330023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.330182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.330207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.330353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.330379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.330559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.330583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.330737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.330762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 
00:25:37.737 [2024-07-16 01:02:12.330912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.737 [2024-07-16 01:02:12.330949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.331129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:37.737 [2024-07-16 01:02:12.331156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.331318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.331343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.737 [2024-07-16 01:02:12.331499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.331527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.331669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.331695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.737 [2024-07-16 01:02:12.331887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.331913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.332084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.332110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.332288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.332315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 
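Interleaved with the connection errors, the xtrace lines above show the harness at work: nvmf/common.sh arms its cleanup trap (process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini on SIGINT/SIGTERM/EXIT), and host/target_disconnect.sh line 19 runs rpc_cmd bdev_malloc_create 64 512 -b Malloc0, asking the running target for a 64 MiB RAM-backed bdev with a 512-byte block size named Malloc0. rpc_cmd is the harness wrapper around scripts/rpc.py (or an already-open RPC pipe); run by hand against a local target it would look roughly like the sketch below, where the socket path is the SPDK default and an assumption here:

#!/usr/bin/env bash
# Rough standalone equivalent of the harness call -- a sketch, not the harness itself.
SOCK=/var/tmp/spdk.sock   # default SPDK RPC socket; adjust if the target uses a non-default one

# 64 MiB malloc bdev, 512-byte blocks, named Malloc0 (same arguments as in the log).
sudo ./scripts/rpc.py -s "$SOCK" bdev_malloc_create 64 512 -b Malloc0

# Confirm the bdev exists before wiring it into an NVMe-oF subsystem.
sudo ./scripts/rpc.py -s "$SOCK" bdev_get_bdevs -b Malloc0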
00:25:37.737 [2024-07-16 01:02:12.332465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.332491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.332678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.332703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.332886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.332912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.333081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.333106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.333324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.333348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.333493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.333518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.333696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.333721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.333894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.333934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.334096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.334124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.334263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.334289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 
00:25:37.737 [2024-07-16 01:02:12.334447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.334475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.334615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.334640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.334791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.334817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.334990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.335018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.335171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.335197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.335344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.335369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.335514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.335540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.335710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.335736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.335882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.335909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.336067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.336093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 
00:25:37.737 [2024-07-16 01:02:12.336256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.336282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.336460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.336491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.336639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.336664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.336849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.336874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.337030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.337055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.737 qpair failed and we were unable to recover it. 00:25:37.737 [2024-07-16 01:02:12.337231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.737 [2024-07-16 01:02:12.337257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.337408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.337435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.337603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.337628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.337774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.337800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.337969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.337995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 
00:25:37.738 [2024-07-16 01:02:12.338159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.338186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.338336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.338362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.338539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.338565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.338712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.338737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.339029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.339055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.339204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.339229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.339412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.339438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.339591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.339619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.339775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.339802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.339977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.340003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 
00:25:37.738 [2024-07-16 01:02:12.340155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.340181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.340325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.340351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.340521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.340546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.340698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.340725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.340934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.340961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.341106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.341132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.341433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.341458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.341649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.341674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.341833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.341859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.342059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.342085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 
00:25:37.738 [2024-07-16 01:02:12.342238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.342263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.342419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.342444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.342591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.342616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.342779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.342805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.342952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.342979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.343155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.343181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.343454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.343479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.343634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.343660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.343828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.343854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.344032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.344057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 
00:25:37.738 [2024-07-16 01:02:12.344210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.344235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.344407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.344437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.344611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.344637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.344809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.344834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.344988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.345015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.345174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.345210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.738 [2024-07-16 01:02:12.345440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.738 [2024-07-16 01:02:12.345465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.738 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.345646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.345672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.345848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.345874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.346034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.346060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 
00:25:37.739 [2024-07-16 01:02:12.346232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.346257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.346430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.346455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.346615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.346641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.346861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.346911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.347087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.347114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.347317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.347343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.347494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.347521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.347690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.347716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.347921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.347948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.348098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.348123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 
00:25:37.739 [2024-07-16 01:02:12.348309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.348334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.348514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.348539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.348715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.348741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.349017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.349043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.349241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.349266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.349451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.349476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.349651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.349676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.349824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.349849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.350061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.350101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.350256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.350283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 
00:25:37.739 [2024-07-16 01:02:12.350511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.350537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.350714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.350739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.350919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.350956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.351098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.351123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.351285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.351313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.351491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.351517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.351665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.351690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.351868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.351900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.352061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.352088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.352279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.352305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 
00:25:37.739 [2024-07-16 01:02:12.352473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.352500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-16 01:02:12.352658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-16 01:02:12.352690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.352867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.352898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.353085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.353111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.353342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.353367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.353513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.353539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.353712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.353738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.353905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.353931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.354083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.354108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.354250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.354276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 
00:25:37.740 [2024-07-16 01:02:12.354453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.354480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.354655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.354681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.354827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.354853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.355065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.355091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.355259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.355285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.355478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.355504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.355699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.355725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.355895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.355922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.356095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.356120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.356289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.356314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 
00:25:37.740 [2024-07-16 01:02:12.356487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.356513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.356684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.356709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.356855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.356885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.357058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.357084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.357262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.357288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.357465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.357491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.357638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.357663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.357842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.357894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.740 Malloc0 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.358097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.358136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.358317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.358351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 
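The stray "Malloc0" token wedged into the error block above is stdout from the test harness rather than part of the socket errors; it reads like the bdev name returned by an earlier malloc-bdev creation RPC, and the same name is attached to the subsystem as a namespace a few blocks further down. That creation step is not visible in this excerpt; if it follows the usual SPDK pattern it would look roughly like the sketch below, where the size and block-size values are placeholders rather than values taken from this log:

    # Assumed earlier step (not shown in this excerpt): create a RAM-backed
    # bdev named Malloc0; 64 MiB total size and 512-byte blocks are examples only.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0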
00:25:37.740 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.740 [2024-07-16 01:02:12.358510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.358537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:37.740 [2024-07-16 01:02:12.358684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.358710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.740 [2024-07-16 01:02:12.358882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.358909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.359061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.359087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.359252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.359277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.359449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.359474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.359626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.359651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.359827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.359852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 
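Interleaved with the connection retries above is the first target-setup step of this test case: line 21 of host/target_disconnect.sh runs rpc_cmd nvmf_create_transport -t tcp -o to create the TCP transport inside the NVMe-oF target (the [[ 0 == 0 ]], xtrace_disable and set +x fragments are shell-trace noise from the helpers in common/autotest_common.sh). Outside the harness, the same step would roughly be the direct RPC call sketched below; the scripts/rpc.py path and the default RPC socket are assumptions, and the -o option is copied verbatim from the trace rather than interpreted:

    # Approximate stand-alone equivalent of the traced step (assumed path/socket):
    ./scripts/rpc.py nvmf_create_transport -t tcp -o

The matching target-side confirmation shows up a little later as the nvmf_tcp_create "*** TCP Transport Init ***" notice.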
00:25:37.740 [2024-07-16 01:02:12.360034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.360061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.360213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.740 [2024-07-16 01:02:12.360238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.740 qpair failed and we were unable to recover it. 00:25:37.740 [2024-07-16 01:02:12.360422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.360447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.360622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.360649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.360797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.360823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.360997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.361024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.361197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.361222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.361396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.361422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.361428] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.741 [2024-07-16 01:02:12.361590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.361617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 
00:25:37.741 [2024-07-16 01:02:12.361770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.361796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.361950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.361978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.362159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.362185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.362324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.362349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.362526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.362551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.362697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.362723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.362873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.362903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.363057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.363083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.363231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.363256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.363397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.363423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 
00:25:37.741 [2024-07-16 01:02:12.363568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.363594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.363776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.363801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.363955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.363982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.364162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.364188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.364361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.364387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.364534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.364561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.364706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.364732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.364883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.364909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.365055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.365080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.365297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.365330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 
00:25:37.741 [2024-07-16 01:02:12.365485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.365512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.365673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.365699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.365844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.365870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.366041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.366067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.366225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.366250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.366398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.366424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.366607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.366632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.366800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.366825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.366987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.367016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.367163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.367189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 
00:25:37.741 [2024-07-16 01:02:12.367341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.367366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.367540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.367566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.367721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.367752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.741 qpair failed and we were unable to recover it. 00:25:37.741 [2024-07-16 01:02:12.367934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.741 [2024-07-16 01:02:12.367960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.368112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.368137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.368325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.368351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.368518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.368544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.368712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.368738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.368890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.368917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.369098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.369124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 
00:25:37.742 [2024-07-16 01:02:12.369280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.369306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.369478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.369504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.742 [2024-07-16 01:02:12.369649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.369675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:37.742 [2024-07-16 01:02:12.369859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.369891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.742 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.742 [2024-07-16 01:02:12.370051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.370077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.370237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.370263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.370432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.370458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.370616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.370641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 
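The next traced setup step (line 22 of host/target_disconnect.sh) creates the subsystem nqn.2016-06.io.spdk:cnode1, allowing any host to connect (-a) and assigning the serial number SPDK00000000000001 (-s). A stand-alone sketch of the same call, again assuming the default scripts/rpc.py location and RPC socket:

    # Create the subsystem the initiator will later connect to (assumed path/socket):
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001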
00:25:37.742 [2024-07-16 01:02:12.370780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.370805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.370961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.370988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.371138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.371164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.371309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.371334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.371505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.371530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.371705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.371731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.371888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.371915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.372063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.372088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.372264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.372289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.372447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.372481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 
00:25:37.742 [2024-07-16 01:02:12.372656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.372682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.372853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.372883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.373043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.373068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.373222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.373249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.373423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.373448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.373627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.373652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.373808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.373842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.374026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.374054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.374208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.374234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.374408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.374434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 
00:25:37.742 [2024-07-16 01:02:12.374602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.374628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.374813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.374838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.374992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.375018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.375175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.375201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.375345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-16 01:02:12.375370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-16 01:02:12.375537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.375562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.375702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.375727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.375884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.375910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.376081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.376107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.376284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.376308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 
00:25:37.743 [2024-07-16 01:02:12.376455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.376480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.376626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.376652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.376822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.376847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.377043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.377070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.377220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.377246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.377439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.377473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.377687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.377725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.743 [2024-07-16 01:02:12.377916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:37.743 [2024-07-16 01:02:12.377958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 
00:25:37.743 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.743 [2024-07-16 01:02:12.378142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.378177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.743 [2024-07-16 01:02:12.378378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.378408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.378584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.378610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.378769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.378794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.378954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.378981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.379156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.379181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.379326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.379351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.379503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.379529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.379683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.379710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 
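Across the two blocks above, line 24 of host/target_disconnect.sh attaches the Malloc0 bdev to the subsystem as a namespace (the trailing xtrace_disable and set +x lines again belong to the harness helpers). Sketched as a direct call under the same assumptions as the earlier examples:

    # Expose the Malloc0 bdev as a namespace of cnode1 (assumed path/socket;
    # the namespace ID is assigned automatically when none is given).
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0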
00:25:37.743 [2024-07-16 01:02:12.379904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.379939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.380090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.380115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.380268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.380295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.380460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.380486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.380634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.380659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.380833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.380858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.381021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.381047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.381200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.381225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.381379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.381405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.381556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.381582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 
00:25:37.743 [2024-07-16 01:02:12.381728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.381754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.381929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.381955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.382109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.382134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.382301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.382327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.382482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.382507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.382657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.382682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.382828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-16 01:02:12.382853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-16 01:02:12.383037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.383063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.383211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.383236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.383381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.383406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 
00:25:37.744 [2024-07-16 01:02:12.383565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.383591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.383774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.383799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.383947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.383973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.384126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.384151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.384333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.384358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.384531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.384556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.384732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.384758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.384971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.385001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.385151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.385177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.385373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.385397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 
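The repeated "connect() failed, errno = 111" entries above are ECONNREFUSED: at this point in the run nothing is listening on 10.0.0.2:4420 yet (the NVMe/TCP listen notice only appears further down, after nvmf_subsystem_add_listener is issued), so every host-side reconnect attempt on tqpair 0x7fae18000b90 is refused and the qpair cannot recover. A minimal shell sketch of the same condition, assuming a machine that can reach 10.0.0.2; the address and port are taken from the log, and the probe itself is illustrative rather than part of the test scripts:

  # Illustrative probe only: with no NVMe/TCP listener bound to 10.0.0.2:4420,
  # connect() fails with errno 111 (ECONNREFUSED), matching the
  # posix_sock_create errors above.
  if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "connection refused: no NVMe/TCP listener on 10.0.0.2:4420 yet"
  fi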
00:25:37.744 [2024-07-16 01:02:12.385546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.385584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.744 [2024-07-16 01:02:12.385799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.385834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:37.744 [2024-07-16 01:02:12.386053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.386087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.744 [2024-07-16 01:02:12.386290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.386328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.386492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.386520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.386667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.386693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.386857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.386890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.387047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.387072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 
00:25:37.744 [2024-07-16 01:02:12.387252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.387277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.387475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.387501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.387670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.387695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.387896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.387922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.388076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.388103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.388274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.388299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.388472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.388497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.388678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.388704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.388892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.388918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.389057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.389083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 
00:25:37.744 [2024-07-16 01:02:12.389269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.389294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.389448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.389474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-16 01:02:12.389618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-16 01:02:12.389644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae18000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-16 01:02:12.389681] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.745 [2024-07-16 01:02:12.392225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:37.745 [2024-07-16 01:02:12.392431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:37.745 [2024-07-16 01:02:12.392465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:37.745 [2024-07-16 01:02:12.392481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:37.745 [2024-07-16 01:02:12.392494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:37.745 [2024-07-16 01:02:12.392532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:37.745 qpair failed and we were unable to recover it. 
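At this point the test script has issued rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 and tcp.c reports the target listening on 10.0.0.2 port 4420; the failure mode then changes from refused TCP connects to fabric-level CONNECT rejections ("Unknown controller ID 0x1", completed with sct 1, sc 130). For reference, a sketch of the equivalent standalone RPC call using SPDK's rpc.py, assuming an SPDK checkout and an nvmf target already running with the default RPC socket (the NQN, address, and port are copied from the log):

  # Equivalent standalone form of the rpc_cmd invocation captured above.
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420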
00:25:37.745 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.745 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:37.745 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.745 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.745 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.745 01:02:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2749836 00:25:37.745 [2024-07-16 01:02:12.402132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:37.745 [2024-07-16 01:02:12.402290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:37.745 [2024-07-16 01:02:12.402318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:37.745 [2024-07-16 01:02:12.402333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:37.745 [2024-07-16 01:02:12.402346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:37.745 [2024-07-16 01:02:12.402377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-16 01:02:12.412074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:37.745 [2024-07-16 01:02:12.412227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:37.745 [2024-07-16 01:02:12.412254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:37.745 [2024-07-16 01:02:12.412268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:37.745 [2024-07-16 01:02:12.412281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:37.745 [2024-07-16 01:02:12.412310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:37.745 qpair failed and we were unable to recover it. 
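Here the discovery listener is added on the same address (nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420) before the script waits on the background initiator (wait 2749836). A hedged sketch of how that target could be probed from an initiator host with nvme-cli; the address, port, and subsystem NQN come from the log, while using nvme-cli here (rather than the test's own initiator) is purely illustrative:

  # Illustrative nvme-cli checks, not part of the test scripts: query the
  # discovery service just registered, then attempt a fabric connect to the
  # subsystem the test exercises.
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1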
00:25:37.745 [2024-07-16 01:02:12.422067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:37.745 [2024-07-16 01:02:12.422225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:37.745 [2024-07-16 01:02:12.422251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:37.745 [2024-07-16 01:02:12.422265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:37.745 [2024-07-16 01:02:12.422278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:37.745 [2024-07-16 01:02:12.422308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-16 01:02:12.432168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:37.745 [2024-07-16 01:02:12.432344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:37.745 [2024-07-16 01:02:12.432370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:37.745 [2024-07-16 01:02:12.432384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:37.745 [2024-07-16 01:02:12.432397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:37.745 [2024-07-16 01:02:12.432426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:37.745 qpair failed and we were unable to recover it. 00:25:38.005 [2024-07-16 01:02:12.442068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.005 [2024-07-16 01:02:12.442236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.005 [2024-07-16 01:02:12.442262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.005 [2024-07-16 01:02:12.442276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.005 [2024-07-16 01:02:12.442289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.005 [2024-07-16 01:02:12.442318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.005 qpair failed and we were unable to recover it. 
00:25:38.005 [2024-07-16 01:02:12.452122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.005 [2024-07-16 01:02:12.452271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.005 [2024-07-16 01:02:12.452297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.005 [2024-07-16 01:02:12.452312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.005 [2024-07-16 01:02:12.452324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.005 [2024-07-16 01:02:12.452366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.005 qpair failed and we were unable to recover it. 00:25:38.005 [2024-07-16 01:02:12.462211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.005 [2024-07-16 01:02:12.462371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.005 [2024-07-16 01:02:12.462397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.005 [2024-07-16 01:02:12.462412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.005 [2024-07-16 01:02:12.462424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.005 [2024-07-16 01:02:12.462454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.005 qpair failed and we were unable to recover it. 00:25:38.005 [2024-07-16 01:02:12.472214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.005 [2024-07-16 01:02:12.472361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.005 [2024-07-16 01:02:12.472392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.005 [2024-07-16 01:02:12.472408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.005 [2024-07-16 01:02:12.472420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.005 [2024-07-16 01:02:12.472449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.005 qpair failed and we were unable to recover it. 
00:25:38.005 [2024-07-16 01:02:12.482158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.005 [2024-07-16 01:02:12.482353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.005 [2024-07-16 01:02:12.482378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.005 [2024-07-16 01:02:12.482392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.005 [2024-07-16 01:02:12.482405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.005 [2024-07-16 01:02:12.482434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.005 qpair failed and we were unable to recover it. 00:25:38.005 [2024-07-16 01:02:12.492179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.005 [2024-07-16 01:02:12.492329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.005 [2024-07-16 01:02:12.492355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.005 [2024-07-16 01:02:12.492369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.005 [2024-07-16 01:02:12.492382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.005 [2024-07-16 01:02:12.492413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.005 qpair failed and we were unable to recover it. 00:25:38.005 [2024-07-16 01:02:12.502211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.005 [2024-07-16 01:02:12.502369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.005 [2024-07-16 01:02:12.502394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.005 [2024-07-16 01:02:12.502408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.006 [2024-07-16 01:02:12.502420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.006 [2024-07-16 01:02:12.502449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.006 qpair failed and we were unable to recover it. 
00:25:38.006 [2024-07-16 01:02:12.512345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.006 [2024-07-16 01:02:12.512495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.006 [2024-07-16 01:02:12.512521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.006 [2024-07-16 01:02:12.512536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.006 [2024-07-16 01:02:12.512549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.006 [2024-07-16 01:02:12.512583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.006 qpair failed and we were unable to recover it. 00:25:38.006 [2024-07-16 01:02:12.522337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.006 [2024-07-16 01:02:12.522490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.006 [2024-07-16 01:02:12.522518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.006 [2024-07-16 01:02:12.522532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.006 [2024-07-16 01:02:12.522544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.006 [2024-07-16 01:02:12.522575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.006 qpair failed and we were unable to recover it. 00:25:38.006 [2024-07-16 01:02:12.532359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.006 [2024-07-16 01:02:12.532512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.006 [2024-07-16 01:02:12.532538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.006 [2024-07-16 01:02:12.532552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.006 [2024-07-16 01:02:12.532564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.006 [2024-07-16 01:02:12.532594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.006 qpair failed and we were unable to recover it. 
00:25:38.006 [2024-07-16 01:02:12.542353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.006 [2024-07-16 01:02:12.542555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.006 [2024-07-16 01:02:12.542580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.006 [2024-07-16 01:02:12.542595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.006 [2024-07-16 01:02:12.542607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.006 [2024-07-16 01:02:12.542636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.006 qpair failed and we were unable to recover it. 00:25:38.006 [2024-07-16 01:02:12.552397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.006 [2024-07-16 01:02:12.552602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.006 [2024-07-16 01:02:12.552627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.006 [2024-07-16 01:02:12.552641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.006 [2024-07-16 01:02:12.552653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.006 [2024-07-16 01:02:12.552682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.006 qpair failed and we were unable to recover it. 00:25:38.006 [2024-07-16 01:02:12.562403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.006 [2024-07-16 01:02:12.562563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.006 [2024-07-16 01:02:12.562594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.006 [2024-07-16 01:02:12.562609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.006 [2024-07-16 01:02:12.562622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.006 [2024-07-16 01:02:12.562650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.006 qpair failed and we were unable to recover it. 
00:25:38.006 [2024-07-16 01:02:12.572457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.006 [2024-07-16 01:02:12.572604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.006 [2024-07-16 01:02:12.572629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.006 [2024-07-16 01:02:12.572644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.006 [2024-07-16 01:02:12.572656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.006 [2024-07-16 01:02:12.572687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.006 qpair failed and we were unable to recover it. 00:25:38.006 [2024-07-16 01:02:12.582482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.006 [2024-07-16 01:02:12.582638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.006 [2024-07-16 01:02:12.582667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.006 [2024-07-16 01:02:12.582682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.006 [2024-07-16 01:02:12.582694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.006 [2024-07-16 01:02:12.582724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.006 qpair failed and we were unable to recover it. 00:25:38.006 [2024-07-16 01:02:12.592585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.006 [2024-07-16 01:02:12.592731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.006 [2024-07-16 01:02:12.592757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.006 [2024-07-16 01:02:12.592771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.006 [2024-07-16 01:02:12.592784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.006 [2024-07-16 01:02:12.592813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.006 qpair failed and we were unable to recover it. 
00:25:38.006 [2024-07-16 01:02:12.602521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.006 [2024-07-16 01:02:12.602676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.006 [2024-07-16 01:02:12.602701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.006 [2024-07-16 01:02:12.602715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.006 [2024-07-16 01:02:12.602733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.006 [2024-07-16 01:02:12.602764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.006 qpair failed and we were unable to recover it. 00:25:38.006 [2024-07-16 01:02:12.612576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.006 [2024-07-16 01:02:12.612759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.006 [2024-07-16 01:02:12.612786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.006 [2024-07-16 01:02:12.612800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.006 [2024-07-16 01:02:12.612816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.006 [2024-07-16 01:02:12.612849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.006 qpair failed and we were unable to recover it. 00:25:38.006 [2024-07-16 01:02:12.622616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.006 [2024-07-16 01:02:12.622774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.006 [2024-07-16 01:02:12.622800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.006 [2024-07-16 01:02:12.622814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.006 [2024-07-16 01:02:12.622827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.006 [2024-07-16 01:02:12.622856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.006 qpair failed and we were unable to recover it. 
00:25:38.006 [2024-07-16 01:02:12.632634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.007 [2024-07-16 01:02:12.632789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.007 [2024-07-16 01:02:12.632814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.007 [2024-07-16 01:02:12.632828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.007 [2024-07-16 01:02:12.632841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.007 [2024-07-16 01:02:12.632870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.007 qpair failed and we were unable to recover it. 00:25:38.007 [2024-07-16 01:02:12.642650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.007 [2024-07-16 01:02:12.642810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.007 [2024-07-16 01:02:12.642835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.007 [2024-07-16 01:02:12.642850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.007 [2024-07-16 01:02:12.642863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.007 [2024-07-16 01:02:12.642899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.007 qpair failed and we were unable to recover it. 00:25:38.007 [2024-07-16 01:02:12.652658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.007 [2024-07-16 01:02:12.652816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.007 [2024-07-16 01:02:12.652841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.007 [2024-07-16 01:02:12.652855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.007 [2024-07-16 01:02:12.652868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.007 [2024-07-16 01:02:12.652906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.007 qpair failed and we were unable to recover it. 
00:25:38.007 [2024-07-16 01:02:12.662669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.007 [2024-07-16 01:02:12.662821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.007 [2024-07-16 01:02:12.662846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.007 [2024-07-16 01:02:12.662860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.007 [2024-07-16 01:02:12.662872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.007 [2024-07-16 01:02:12.662913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.007 qpair failed and we were unable to recover it. 00:25:38.007 [2024-07-16 01:02:12.672730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.007 [2024-07-16 01:02:12.672895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.007 [2024-07-16 01:02:12.672921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.007 [2024-07-16 01:02:12.672935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.007 [2024-07-16 01:02:12.672948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.007 [2024-07-16 01:02:12.672977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.007 qpair failed and we were unable to recover it. 00:25:38.007 [2024-07-16 01:02:12.682781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.007 [2024-07-16 01:02:12.682945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.007 [2024-07-16 01:02:12.682971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.007 [2024-07-16 01:02:12.682985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.007 [2024-07-16 01:02:12.682997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.007 [2024-07-16 01:02:12.683028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.007 qpair failed and we were unable to recover it. 
00:25:38.007 [2024-07-16 01:02:12.692768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.007 [2024-07-16 01:02:12.692917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.007 [2024-07-16 01:02:12.692942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.007 [2024-07-16 01:02:12.692968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.007 [2024-07-16 01:02:12.692982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.007 [2024-07-16 01:02:12.693011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.007 qpair failed and we were unable to recover it. 00:25:38.007 [2024-07-16 01:02:12.702773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.007 [2024-07-16 01:02:12.702935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.007 [2024-07-16 01:02:12.702961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.007 [2024-07-16 01:02:12.702975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.007 [2024-07-16 01:02:12.702987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.007 [2024-07-16 01:02:12.703017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.007 qpair failed and we were unable to recover it. 00:25:38.007 [2024-07-16 01:02:12.712802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.007 [2024-07-16 01:02:12.712975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.007 [2024-07-16 01:02:12.713000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.007 [2024-07-16 01:02:12.713015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.007 [2024-07-16 01:02:12.713027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.007 [2024-07-16 01:02:12.713055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.007 qpair failed and we were unable to recover it. 
00:25:38.007 [2024-07-16 01:02:12.722849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.007 [2024-07-16 01:02:12.723032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.007 [2024-07-16 01:02:12.723058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.007 [2024-07-16 01:02:12.723072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.007 [2024-07-16 01:02:12.723085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.007 [2024-07-16 01:02:12.723126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.007 qpair failed and we were unable to recover it. 00:25:38.007 [2024-07-16 01:02:12.732897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.007 [2024-07-16 01:02:12.733052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.007 [2024-07-16 01:02:12.733077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.007 [2024-07-16 01:02:12.733091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.007 [2024-07-16 01:02:12.733103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.007 [2024-07-16 01:02:12.733133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.007 qpair failed and we were unable to recover it. 00:25:38.007 [2024-07-16 01:02:12.742906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.007 [2024-07-16 01:02:12.743060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.007 [2024-07-16 01:02:12.743086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.007 [2024-07-16 01:02:12.743100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.007 [2024-07-16 01:02:12.743113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.007 [2024-07-16 01:02:12.743141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.007 qpair failed and we were unable to recover it. 
00:25:38.007 [2024-07-16 01:02:12.752921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.007 [2024-07-16 01:02:12.753069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.007 [2024-07-16 01:02:12.753093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.007 [2024-07-16 01:02:12.753108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.007 [2024-07-16 01:02:12.753120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.007 [2024-07-16 01:02:12.753151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.007 qpair failed and we were unable to recover it. 00:25:38.266 [2024-07-16 01:02:12.762975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.266 [2024-07-16 01:02:12.763125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.266 [2024-07-16 01:02:12.763151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.266 [2024-07-16 01:02:12.763165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.266 [2024-07-16 01:02:12.763178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.266 [2024-07-16 01:02:12.763207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.266 qpair failed and we were unable to recover it. 00:25:38.266 [2024-07-16 01:02:12.773055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.267 [2024-07-16 01:02:12.773205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.267 [2024-07-16 01:02:12.773230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.267 [2024-07-16 01:02:12.773244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.267 [2024-07-16 01:02:12.773257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.267 [2024-07-16 01:02:12.773286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.267 qpair failed and we were unable to recover it. 
00:25:38.267 [2024-07-16 01:02:12.783013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.267 [2024-07-16 01:02:12.783174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.267 [2024-07-16 01:02:12.783198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.267 [2024-07-16 01:02:12.783218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.267 [2024-07-16 01:02:12.783232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.267 [2024-07-16 01:02:12.783264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.267 qpair failed and we were unable to recover it. 00:25:38.267 [2024-07-16 01:02:12.793042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.267 [2024-07-16 01:02:12.793204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.267 [2024-07-16 01:02:12.793229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.267 [2024-07-16 01:02:12.793243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.267 [2024-07-16 01:02:12.793255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.267 [2024-07-16 01:02:12.793284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.267 qpair failed and we were unable to recover it. 00:25:38.267 [2024-07-16 01:02:12.803184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.267 [2024-07-16 01:02:12.803337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.267 [2024-07-16 01:02:12.803362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.267 [2024-07-16 01:02:12.803376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.267 [2024-07-16 01:02:12.803389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.267 [2024-07-16 01:02:12.803418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.267 qpair failed and we were unable to recover it. 
00:25:38.267 [2024-07-16 01:02:12.813168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.267 [2024-07-16 01:02:12.813324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.267 [2024-07-16 01:02:12.813349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.267 [2024-07-16 01:02:12.813363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.267 [2024-07-16 01:02:12.813376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.267 [2024-07-16 01:02:12.813405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.267 qpair failed and we were unable to recover it. 00:25:38.267 [2024-07-16 01:02:12.823161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.267 [2024-07-16 01:02:12.823323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.267 [2024-07-16 01:02:12.823349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.267 [2024-07-16 01:02:12.823364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.267 [2024-07-16 01:02:12.823379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.267 [2024-07-16 01:02:12.823409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.267 qpair failed and we were unable to recover it. 00:25:38.267 [2024-07-16 01:02:12.833258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.267 [2024-07-16 01:02:12.833438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.267 [2024-07-16 01:02:12.833463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.267 [2024-07-16 01:02:12.833478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.267 [2024-07-16 01:02:12.833490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.267 [2024-07-16 01:02:12.833520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.267 qpair failed and we were unable to recover it. 
00:25:38.267 [2024-07-16 01:02:12.843212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.267 [2024-07-16 01:02:12.843362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.267 [2024-07-16 01:02:12.843387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.267 [2024-07-16 01:02:12.843401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.267 [2024-07-16 01:02:12.843414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.267 [2024-07-16 01:02:12.843442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.267 qpair failed and we were unable to recover it. 00:25:38.267 [2024-07-16 01:02:12.853253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.267 [2024-07-16 01:02:12.853396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.267 [2024-07-16 01:02:12.853420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.267 [2024-07-16 01:02:12.853435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.267 [2024-07-16 01:02:12.853447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.267 [2024-07-16 01:02:12.853477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.267 qpair failed and we were unable to recover it. 00:25:38.267 [2024-07-16 01:02:12.863253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.267 [2024-07-16 01:02:12.863415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.267 [2024-07-16 01:02:12.863440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.267 [2024-07-16 01:02:12.863454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.267 [2024-07-16 01:02:12.863467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.267 [2024-07-16 01:02:12.863496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.267 qpair failed and we were unable to recover it. 
00:25:38.267 [2024-07-16 01:02:12.873269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.267 [2024-07-16 01:02:12.873417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.267 [2024-07-16 01:02:12.873448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.267 [2024-07-16 01:02:12.873464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.267 [2024-07-16 01:02:12.873476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.267 [2024-07-16 01:02:12.873507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.267 qpair failed and we were unable to recover it. 00:25:38.267 [2024-07-16 01:02:12.883300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.267 [2024-07-16 01:02:12.883450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.267 [2024-07-16 01:02:12.883475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.267 [2024-07-16 01:02:12.883488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.267 [2024-07-16 01:02:12.883501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.267 [2024-07-16 01:02:12.883531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.267 qpair failed and we were unable to recover it. 00:25:38.267 [2024-07-16 01:02:12.893426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.267 [2024-07-16 01:02:12.893572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.267 [2024-07-16 01:02:12.893597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.267 [2024-07-16 01:02:12.893611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.267 [2024-07-16 01:02:12.893624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.267 [2024-07-16 01:02:12.893652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.267 qpair failed and we were unable to recover it. 
00:25:38.267 [2024-07-16 01:02:12.903357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.267 [2024-07-16 01:02:12.903533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.267 [2024-07-16 01:02:12.903558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.267 [2024-07-16 01:02:12.903572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.267 [2024-07-16 01:02:12.903585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.267 [2024-07-16 01:02:12.903614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.267 qpair failed and we were unable to recover it. 00:25:38.267 [2024-07-16 01:02:12.913420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.267 [2024-07-16 01:02:12.913572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.267 [2024-07-16 01:02:12.913599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.267 [2024-07-16 01:02:12.913613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.268 [2024-07-16 01:02:12.913626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.268 [2024-07-16 01:02:12.913664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.268 qpair failed and we were unable to recover it. 00:25:38.268 [2024-07-16 01:02:12.923418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.268 [2024-07-16 01:02:12.923565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.268 [2024-07-16 01:02:12.923591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.268 [2024-07-16 01:02:12.923605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.268 [2024-07-16 01:02:12.923618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.268 [2024-07-16 01:02:12.923661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.268 qpair failed and we were unable to recover it. 
00:25:38.268 [2024-07-16 01:02:12.933436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.268 [2024-07-16 01:02:12.933583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.268 [2024-07-16 01:02:12.933608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.268 [2024-07-16 01:02:12.933622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.268 [2024-07-16 01:02:12.933634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.268 [2024-07-16 01:02:12.933662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.268 qpair failed and we were unable to recover it. 00:25:38.268 [2024-07-16 01:02:12.943542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.268 [2024-07-16 01:02:12.943748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.268 [2024-07-16 01:02:12.943775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.268 [2024-07-16 01:02:12.943789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.268 [2024-07-16 01:02:12.943802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.268 [2024-07-16 01:02:12.943832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.268 qpair failed and we were unable to recover it. 00:25:38.268 [2024-07-16 01:02:12.953599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.268 [2024-07-16 01:02:12.953759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.268 [2024-07-16 01:02:12.953786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.268 [2024-07-16 01:02:12.953800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.268 [2024-07-16 01:02:12.953813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.268 [2024-07-16 01:02:12.953842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.268 qpair failed and we were unable to recover it. 
00:25:38.268 [2024-07-16 01:02:12.963554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.268 [2024-07-16 01:02:12.963748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.268 [2024-07-16 01:02:12.963781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.268 [2024-07-16 01:02:12.963803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.268 [2024-07-16 01:02:12.963817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.268 [2024-07-16 01:02:12.963848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.268 qpair failed and we were unable to recover it. 00:25:38.268 [2024-07-16 01:02:12.973560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.268 [2024-07-16 01:02:12.973733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.268 [2024-07-16 01:02:12.973759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.268 [2024-07-16 01:02:12.973773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.268 [2024-07-16 01:02:12.973785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.268 [2024-07-16 01:02:12.973814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.268 qpair failed and we were unable to recover it. 00:25:38.268 [2024-07-16 01:02:12.983584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.268 [2024-07-16 01:02:12.983753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.268 [2024-07-16 01:02:12.983778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.268 [2024-07-16 01:02:12.983791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.268 [2024-07-16 01:02:12.983803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.268 [2024-07-16 01:02:12.983831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.268 qpair failed and we were unable to recover it. 
00:25:38.268 [2024-07-16 01:02:12.993611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.268 [2024-07-16 01:02:12.993773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.268 [2024-07-16 01:02:12.993799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.268 [2024-07-16 01:02:12.993813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.268 [2024-07-16 01:02:12.993826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.268 [2024-07-16 01:02:12.993856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.268 qpair failed and we were unable to recover it. 00:25:38.268 [2024-07-16 01:02:13.003632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.268 [2024-07-16 01:02:13.003792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.268 [2024-07-16 01:02:13.003817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.268 [2024-07-16 01:02:13.003831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.268 [2024-07-16 01:02:13.003850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.268 [2024-07-16 01:02:13.003888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.268 qpair failed and we were unable to recover it. 00:25:38.268 [2024-07-16 01:02:13.013644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.268 [2024-07-16 01:02:13.013792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.268 [2024-07-16 01:02:13.013817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.268 [2024-07-16 01:02:13.013832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.268 [2024-07-16 01:02:13.013844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.268 [2024-07-16 01:02:13.013874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.268 qpair failed and we were unable to recover it. 
00:25:38.526 [2024-07-16 01:02:13.023702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.526 [2024-07-16 01:02:13.023862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.526 [2024-07-16 01:02:13.023901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.526 [2024-07-16 01:02:13.023919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.526 [2024-07-16 01:02:13.023932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.526 [2024-07-16 01:02:13.023963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.526 qpair failed and we were unable to recover it. 00:25:38.526 [2024-07-16 01:02:13.033728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.526 [2024-07-16 01:02:13.033885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.526 [2024-07-16 01:02:13.033913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.526 [2024-07-16 01:02:13.033928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.526 [2024-07-16 01:02:13.033941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.526 [2024-07-16 01:02:13.033971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.526 qpair failed and we were unable to recover it. 00:25:38.526 [2024-07-16 01:02:13.043750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.526 [2024-07-16 01:02:13.043914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.526 [2024-07-16 01:02:13.043940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.526 [2024-07-16 01:02:13.043954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.526 [2024-07-16 01:02:13.043967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.526 [2024-07-16 01:02:13.043997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.526 qpair failed and we were unable to recover it. 
00:25:38.526 [2024-07-16 01:02:13.053778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.526 [2024-07-16 01:02:13.053936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.526 [2024-07-16 01:02:13.053961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.526 [2024-07-16 01:02:13.053976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.526 [2024-07-16 01:02:13.053988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.526 [2024-07-16 01:02:13.054017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.526 qpair failed and we were unable to recover it. 00:25:38.526 [2024-07-16 01:02:13.063812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.526 [2024-07-16 01:02:13.063986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.526 [2024-07-16 01:02:13.064011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.526 [2024-07-16 01:02:13.064025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.526 [2024-07-16 01:02:13.064037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.526 [2024-07-16 01:02:13.064068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.526 qpair failed and we were unable to recover it. 00:25:38.526 [2024-07-16 01:02:13.073824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.526 [2024-07-16 01:02:13.073991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.526 [2024-07-16 01:02:13.074017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.526 [2024-07-16 01:02:13.074031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.526 [2024-07-16 01:02:13.074043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.526 [2024-07-16 01:02:13.074073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.526 qpair failed and we were unable to recover it. 
00:25:38.526 [2024-07-16 01:02:13.083858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.526 [2024-07-16 01:02:13.084014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.526 [2024-07-16 01:02:13.084040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.526 [2024-07-16 01:02:13.084055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.526 [2024-07-16 01:02:13.084068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.526 [2024-07-16 01:02:13.084098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.526 qpair failed and we were unable to recover it. 00:25:38.526 [2024-07-16 01:02:13.093873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.526 [2024-07-16 01:02:13.094037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.526 [2024-07-16 01:02:13.094062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.526 [2024-07-16 01:02:13.094076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.526 [2024-07-16 01:02:13.094094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.526 [2024-07-16 01:02:13.094125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.526 qpair failed and we were unable to recover it. 00:25:38.526 [2024-07-16 01:02:13.103949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.526 [2024-07-16 01:02:13.104110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.526 [2024-07-16 01:02:13.104135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.526 [2024-07-16 01:02:13.104150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.526 [2024-07-16 01:02:13.104163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.526 [2024-07-16 01:02:13.104192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.526 qpair failed and we were unable to recover it. 
00:25:38.526 [2024-07-16 01:02:13.114026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.526 [2024-07-16 01:02:13.114181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.526 [2024-07-16 01:02:13.114207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.526 [2024-07-16 01:02:13.114220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.526 [2024-07-16 01:02:13.114233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.526 [2024-07-16 01:02:13.114262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.526 qpair failed and we were unable to recover it. 00:25:38.526 [2024-07-16 01:02:13.124087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.526 [2024-07-16 01:02:13.124281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.526 [2024-07-16 01:02:13.124307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.526 [2024-07-16 01:02:13.124321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.526 [2024-07-16 01:02:13.124334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.526 [2024-07-16 01:02:13.124363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.526 qpair failed and we were unable to recover it. 00:25:38.526 [2024-07-16 01:02:13.134007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.526 [2024-07-16 01:02:13.134179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.526 [2024-07-16 01:02:13.134207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.526 [2024-07-16 01:02:13.134222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.526 [2024-07-16 01:02:13.134234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.526 [2024-07-16 01:02:13.134266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.526 qpair failed and we were unable to recover it. 
00:25:38.526 [2024-07-16 01:02:13.144049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.526 [2024-07-16 01:02:13.144220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.526 [2024-07-16 01:02:13.144245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.526 [2024-07-16 01:02:13.144259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.526 [2024-07-16 01:02:13.144272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.526 [2024-07-16 01:02:13.144303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.526 qpair failed and we were unable to recover it. 00:25:38.526 [2024-07-16 01:02:13.154052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.526 [2024-07-16 01:02:13.154201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.526 [2024-07-16 01:02:13.154226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.526 [2024-07-16 01:02:13.154240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.526 [2024-07-16 01:02:13.154253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.526 [2024-07-16 01:02:13.154282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.526 qpair failed and we were unable to recover it. 00:25:38.527 [2024-07-16 01:02:13.164098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.527 [2024-07-16 01:02:13.164248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.527 [2024-07-16 01:02:13.164277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.527 [2024-07-16 01:02:13.164291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.527 [2024-07-16 01:02:13.164302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.527 [2024-07-16 01:02:13.164332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.527 qpair failed and we were unable to recover it. 
00:25:38.527 [2024-07-16 01:02:13.174212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.527 [2024-07-16 01:02:13.174359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.527 [2024-07-16 01:02:13.174384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.527 [2024-07-16 01:02:13.174398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.527 [2024-07-16 01:02:13.174411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.527 [2024-07-16 01:02:13.174441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.527 qpair failed and we were unable to recover it. 00:25:38.527 [2024-07-16 01:02:13.184311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.527 [2024-07-16 01:02:13.184516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.527 [2024-07-16 01:02:13.184541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.527 [2024-07-16 01:02:13.184561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.527 [2024-07-16 01:02:13.184574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.527 [2024-07-16 01:02:13.184604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.527 qpair failed and we were unable to recover it. 00:25:38.527 [2024-07-16 01:02:13.194225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.527 [2024-07-16 01:02:13.194389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.527 [2024-07-16 01:02:13.194413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.527 [2024-07-16 01:02:13.194427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.527 [2024-07-16 01:02:13.194440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.527 [2024-07-16 01:02:13.194469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.527 qpair failed and we were unable to recover it. 
00:25:38.527 [2024-07-16 01:02:13.204352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.527 [2024-07-16 01:02:13.204495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.527 [2024-07-16 01:02:13.204520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.527 [2024-07-16 01:02:13.204534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.527 [2024-07-16 01:02:13.204546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.527 [2024-07-16 01:02:13.204575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.527 qpair failed and we were unable to recover it. 00:25:38.527 [2024-07-16 01:02:13.214284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.527 [2024-07-16 01:02:13.214440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.527 [2024-07-16 01:02:13.214465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.527 [2024-07-16 01:02:13.214480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.527 [2024-07-16 01:02:13.214492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.527 [2024-07-16 01:02:13.214522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.527 qpair failed and we were unable to recover it. 00:25:38.527 [2024-07-16 01:02:13.224326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.527 [2024-07-16 01:02:13.224484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.527 [2024-07-16 01:02:13.224510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.527 [2024-07-16 01:02:13.224524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.527 [2024-07-16 01:02:13.224537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.527 [2024-07-16 01:02:13.224566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.527 qpair failed and we were unable to recover it. 
00:25:38.527 [2024-07-16 01:02:13.234306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.527 [2024-07-16 01:02:13.234449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.527 [2024-07-16 01:02:13.234473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.527 [2024-07-16 01:02:13.234488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.527 [2024-07-16 01:02:13.234500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.527 [2024-07-16 01:02:13.234531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.527 qpair failed and we were unable to recover it. 00:25:38.527 [2024-07-16 01:02:13.244338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.527 [2024-07-16 01:02:13.244525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.527 [2024-07-16 01:02:13.244550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.527 [2024-07-16 01:02:13.244565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.527 [2024-07-16 01:02:13.244577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.527 [2024-07-16 01:02:13.244608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.527 qpair failed and we were unable to recover it. 00:25:38.527 [2024-07-16 01:02:13.254328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.527 [2024-07-16 01:02:13.254492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.527 [2024-07-16 01:02:13.254517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.527 [2024-07-16 01:02:13.254531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.527 [2024-07-16 01:02:13.254543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.527 [2024-07-16 01:02:13.254573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.527 qpair failed and we were unable to recover it. 
00:25:38.527 [2024-07-16 01:02:13.264364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.527 [2024-07-16 01:02:13.264514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.527 [2024-07-16 01:02:13.264540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.527 [2024-07-16 01:02:13.264554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.527 [2024-07-16 01:02:13.264566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.527 [2024-07-16 01:02:13.264596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.527 qpair failed and we were unable to recover it. 00:25:38.527 [2024-07-16 01:02:13.274378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.527 [2024-07-16 01:02:13.274531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.527 [2024-07-16 01:02:13.274561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.527 [2024-07-16 01:02:13.274577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.527 [2024-07-16 01:02:13.274589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.527 [2024-07-16 01:02:13.274619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.527 qpair failed and we were unable to recover it. 00:25:38.786 [2024-07-16 01:02:13.284446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.786 [2024-07-16 01:02:13.284601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.786 [2024-07-16 01:02:13.284626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.786 [2024-07-16 01:02:13.284640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.786 [2024-07-16 01:02:13.284652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.786 [2024-07-16 01:02:13.284682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.786 qpair failed and we were unable to recover it. 
00:25:38.786 [2024-07-16 01:02:13.294463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.786 [2024-07-16 01:02:13.294619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.786 [2024-07-16 01:02:13.294647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.786 [2024-07-16 01:02:13.294666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.786 [2024-07-16 01:02:13.294680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.786 [2024-07-16 01:02:13.294711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.786 qpair failed and we were unable to recover it. 00:25:38.786 [2024-07-16 01:02:13.304565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.786 [2024-07-16 01:02:13.304728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.786 [2024-07-16 01:02:13.304753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.786 [2024-07-16 01:02:13.304768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.786 [2024-07-16 01:02:13.304781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.786 [2024-07-16 01:02:13.304812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.786 qpair failed and we were unable to recover it. 00:25:38.786 [2024-07-16 01:02:13.314489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.786 [2024-07-16 01:02:13.314642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.786 [2024-07-16 01:02:13.314667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.786 [2024-07-16 01:02:13.314681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.786 [2024-07-16 01:02:13.314694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.786 [2024-07-16 01:02:13.314729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.786 qpair failed and we were unable to recover it. 
00:25:38.786 [2024-07-16 01:02:13.324616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.786 [2024-07-16 01:02:13.324783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.786 [2024-07-16 01:02:13.324809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.786 [2024-07-16 01:02:13.324823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.786 [2024-07-16 01:02:13.324836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.786 [2024-07-16 01:02:13.324866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.786 qpair failed and we were unable to recover it. 00:25:38.786 [2024-07-16 01:02:13.334608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.786 [2024-07-16 01:02:13.334800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.786 [2024-07-16 01:02:13.334825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.786 [2024-07-16 01:02:13.334840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.786 [2024-07-16 01:02:13.334852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.786 [2024-07-16 01:02:13.334889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.786 qpair failed and we were unable to recover it. 00:25:38.786 [2024-07-16 01:02:13.344632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.786 [2024-07-16 01:02:13.344789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.786 [2024-07-16 01:02:13.344814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.786 [2024-07-16 01:02:13.344828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.786 [2024-07-16 01:02:13.344841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.786 [2024-07-16 01:02:13.344870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.786 qpair failed and we were unable to recover it. 
00:25:38.786 [2024-07-16 01:02:13.354610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.786 [2024-07-16 01:02:13.354760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.786 [2024-07-16 01:02:13.354785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.786 [2024-07-16 01:02:13.354799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.786 [2024-07-16 01:02:13.354812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.786 [2024-07-16 01:02:13.354841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.786 qpair failed and we were unable to recover it. 00:25:38.786 [2024-07-16 01:02:13.364625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.786 [2024-07-16 01:02:13.364779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.786 [2024-07-16 01:02:13.364809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.786 [2024-07-16 01:02:13.364824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.786 [2024-07-16 01:02:13.364837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.786 [2024-07-16 01:02:13.364866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.786 qpair failed and we were unable to recover it. 00:25:38.786 [2024-07-16 01:02:13.374662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.786 [2024-07-16 01:02:13.374803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.786 [2024-07-16 01:02:13.374828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.786 [2024-07-16 01:02:13.374842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.786 [2024-07-16 01:02:13.374855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.786 [2024-07-16 01:02:13.374890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.786 qpair failed and we were unable to recover it. 
00:25:38.786 [2024-07-16 01:02:13.384718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.786 [2024-07-16 01:02:13.384869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.786 [2024-07-16 01:02:13.384901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.786 [2024-07-16 01:02:13.384915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.786 [2024-07-16 01:02:13.384928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.786 [2024-07-16 01:02:13.384956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.786 qpair failed and we were unable to recover it. 00:25:38.786 [2024-07-16 01:02:13.394759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.786 [2024-07-16 01:02:13.394921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.786 [2024-07-16 01:02:13.394947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.786 [2024-07-16 01:02:13.394962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.786 [2024-07-16 01:02:13.394974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.786 [2024-07-16 01:02:13.395004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.786 qpair failed and we were unable to recover it. 00:25:38.786 [2024-07-16 01:02:13.404851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.786 [2024-07-16 01:02:13.405008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.786 [2024-07-16 01:02:13.405034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.786 [2024-07-16 01:02:13.405054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.786 [2024-07-16 01:02:13.405068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.786 [2024-07-16 01:02:13.405104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.786 qpair failed and we were unable to recover it. 
00:25:38.786 [2024-07-16 01:02:13.414777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.786 [2024-07-16 01:02:13.414930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.786 [2024-07-16 01:02:13.414957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.786 [2024-07-16 01:02:13.414971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.787 [2024-07-16 01:02:13.414983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.787 [2024-07-16 01:02:13.415014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.787 qpair failed and we were unable to recover it. 00:25:38.787 [2024-07-16 01:02:13.424840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.787 [2024-07-16 01:02:13.425012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.787 [2024-07-16 01:02:13.425037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.787 [2024-07-16 01:02:13.425052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.787 [2024-07-16 01:02:13.425065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.787 [2024-07-16 01:02:13.425094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.787 qpair failed and we were unable to recover it. 00:25:38.787 [2024-07-16 01:02:13.434836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.787 [2024-07-16 01:02:13.435034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.787 [2024-07-16 01:02:13.435060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.787 [2024-07-16 01:02:13.435075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.787 [2024-07-16 01:02:13.435087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.787 [2024-07-16 01:02:13.435116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.787 qpair failed and we were unable to recover it. 
00:25:38.787 [2024-07-16 01:02:13.444855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.787 [2024-07-16 01:02:13.445025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.787 [2024-07-16 01:02:13.445052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.787 [2024-07-16 01:02:13.445067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.787 [2024-07-16 01:02:13.445083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.787 [2024-07-16 01:02:13.445115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.787 qpair failed and we were unable to recover it. 00:25:38.787 [2024-07-16 01:02:13.454905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.787 [2024-07-16 01:02:13.455071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.787 [2024-07-16 01:02:13.455096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.787 [2024-07-16 01:02:13.455110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.787 [2024-07-16 01:02:13.455123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.787 [2024-07-16 01:02:13.455153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.787 qpair failed and we were unable to recover it. 00:25:38.787 [2024-07-16 01:02:13.465055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.787 [2024-07-16 01:02:13.465233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.787 [2024-07-16 01:02:13.465259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.787 [2024-07-16 01:02:13.465273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.787 [2024-07-16 01:02:13.465286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.787 [2024-07-16 01:02:13.465315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.787 qpair failed and we were unable to recover it. 
00:25:38.787 [2024-07-16 01:02:13.475037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.787 [2024-07-16 01:02:13.475194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.787 [2024-07-16 01:02:13.475219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.787 [2024-07-16 01:02:13.475233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.787 [2024-07-16 01:02:13.475245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.787 [2024-07-16 01:02:13.475275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.787 qpair failed and we were unable to recover it. 00:25:38.787 [2024-07-16 01:02:13.484981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.787 [2024-07-16 01:02:13.485130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.787 [2024-07-16 01:02:13.485156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.787 [2024-07-16 01:02:13.485170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.787 [2024-07-16 01:02:13.485183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.787 [2024-07-16 01:02:13.485213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.787 qpair failed and we were unable to recover it. 00:25:38.787 [2024-07-16 01:02:13.495038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.787 [2024-07-16 01:02:13.495208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.787 [2024-07-16 01:02:13.495233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.787 [2024-07-16 01:02:13.495248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.787 [2024-07-16 01:02:13.495266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.787 [2024-07-16 01:02:13.495295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.787 qpair failed and we were unable to recover it. 
00:25:38.787 [2024-07-16 01:02:13.505065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.787 [2024-07-16 01:02:13.505222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.787 [2024-07-16 01:02:13.505248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.787 [2024-07-16 01:02:13.505262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.787 [2024-07-16 01:02:13.505275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.787 [2024-07-16 01:02:13.505305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.787 qpair failed and we were unable to recover it. 00:25:38.787 [2024-07-16 01:02:13.515047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.787 [2024-07-16 01:02:13.515202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.787 [2024-07-16 01:02:13.515227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.787 [2024-07-16 01:02:13.515241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.787 [2024-07-16 01:02:13.515253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.787 [2024-07-16 01:02:13.515285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.787 qpair failed and we were unable to recover it. 00:25:38.787 [2024-07-16 01:02:13.525090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.787 [2024-07-16 01:02:13.525237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.787 [2024-07-16 01:02:13.525262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.787 [2024-07-16 01:02:13.525275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.787 [2024-07-16 01:02:13.525288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.787 [2024-07-16 01:02:13.525318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.787 qpair failed and we were unable to recover it. 
00:25:38.787 [2024-07-16 01:02:13.535169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:38.787 [2024-07-16 01:02:13.535311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:38.787 [2024-07-16 01:02:13.535335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:38.787 [2024-07-16 01:02:13.535349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:38.787 [2024-07-16 01:02:13.535362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:38.787 [2024-07-16 01:02:13.535390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.787 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-16 01:02:13.545333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.047 [2024-07-16 01:02:13.545541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.047 [2024-07-16 01:02:13.545566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.047 [2024-07-16 01:02:13.545579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.047 [2024-07-16 01:02:13.545592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.047 [2024-07-16 01:02:13.545623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-16 01:02:13.555219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.047 [2024-07-16 01:02:13.555374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.047 [2024-07-16 01:02:13.555399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.047 [2024-07-16 01:02:13.555413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.047 [2024-07-16 01:02:13.555426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.047 [2024-07-16 01:02:13.555457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.047 qpair failed and we were unable to recover it. 
00:25:39.047 [2024-07-16 01:02:13.565189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.047 [2024-07-16 01:02:13.565341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.047 [2024-07-16 01:02:13.565366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.047 [2024-07-16 01:02:13.565380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.047 [2024-07-16 01:02:13.565393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.047 [2024-07-16 01:02:13.565424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-16 01:02:13.575214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.047 [2024-07-16 01:02:13.575365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.047 [2024-07-16 01:02:13.575390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.047 [2024-07-16 01:02:13.575404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.047 [2024-07-16 01:02:13.575417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.047 [2024-07-16 01:02:13.575448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-16 01:02:13.585288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.047 [2024-07-16 01:02:13.585443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.047 [2024-07-16 01:02:13.585468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.047 [2024-07-16 01:02:13.585489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.047 [2024-07-16 01:02:13.585502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.047 [2024-07-16 01:02:13.585531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.047 qpair failed and we were unable to recover it. 
00:25:39.047 [2024-07-16 01:02:13.595381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.047 [2024-07-16 01:02:13.595532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.047 [2024-07-16 01:02:13.595557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.047 [2024-07-16 01:02:13.595571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.047 [2024-07-16 01:02:13.595584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.047 [2024-07-16 01:02:13.595614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-16 01:02:13.605389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.047 [2024-07-16 01:02:13.605556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.047 [2024-07-16 01:02:13.605581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.047 [2024-07-16 01:02:13.605595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.047 [2024-07-16 01:02:13.605607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.047 [2024-07-16 01:02:13.605637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-16 01:02:13.615364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.047 [2024-07-16 01:02:13.615566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.047 [2024-07-16 01:02:13.615591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.047 [2024-07-16 01:02:13.615605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.047 [2024-07-16 01:02:13.615618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.047 [2024-07-16 01:02:13.615647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.047 qpair failed and we were unable to recover it. 
00:25:39.047 [2024-07-16 01:02:13.625356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.047 [2024-07-16 01:02:13.625509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.047 [2024-07-16 01:02:13.625535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.047 [2024-07-16 01:02:13.625549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.047 [2024-07-16 01:02:13.625561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.047 [2024-07-16 01:02:13.625592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-16 01:02:13.635379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.047 [2024-07-16 01:02:13.635529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.047 [2024-07-16 01:02:13.635555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.047 [2024-07-16 01:02:13.635569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.047 [2024-07-16 01:02:13.635582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.047 [2024-07-16 01:02:13.635610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-16 01:02:13.645425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.047 [2024-07-16 01:02:13.645575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.047 [2024-07-16 01:02:13.645600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.047 [2024-07-16 01:02:13.645614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.047 [2024-07-16 01:02:13.645627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.047 [2024-07-16 01:02:13.645657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.047 qpair failed and we were unable to recover it. 
00:25:39.047 [2024-07-16 01:02:13.655487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.047 [2024-07-16 01:02:13.655673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.047 [2024-07-16 01:02:13.655698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.047 [2024-07-16 01:02:13.655713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.047 [2024-07-16 01:02:13.655725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.047 [2024-07-16 01:02:13.655755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-16 01:02:13.665573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.047 [2024-07-16 01:02:13.665730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.047 [2024-07-16 01:02:13.665755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.047 [2024-07-16 01:02:13.665768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.047 [2024-07-16 01:02:13.665781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.047 [2024-07-16 01:02:13.665811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-16 01:02:13.675524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.047 [2024-07-16 01:02:13.675679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.047 [2024-07-16 01:02:13.675712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.047 [2024-07-16 01:02:13.675728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.047 [2024-07-16 01:02:13.675740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.047 [2024-07-16 01:02:13.675770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.047 qpair failed and we were unable to recover it. 
00:25:39.047 [2024-07-16 01:02:13.685618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.047 [2024-07-16 01:02:13.685766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.047 [2024-07-16 01:02:13.685790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.047 [2024-07-16 01:02:13.685804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.047 [2024-07-16 01:02:13.685817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.047 [2024-07-16 01:02:13.685847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-16 01:02:13.695587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.047 [2024-07-16 01:02:13.695738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.047 [2024-07-16 01:02:13.695763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.047 [2024-07-16 01:02:13.695777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.047 [2024-07-16 01:02:13.695789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.048 [2024-07-16 01:02:13.695818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-16 01:02:13.705629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.048 [2024-07-16 01:02:13.705828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.048 [2024-07-16 01:02:13.705855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.048 [2024-07-16 01:02:13.705869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.048 [2024-07-16 01:02:13.705890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.048 [2024-07-16 01:02:13.705921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.048 qpair failed and we were unable to recover it. 
00:25:39.048 [2024-07-16 01:02:13.715729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.048 [2024-07-16 01:02:13.715893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.048 [2024-07-16 01:02:13.715918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.048 [2024-07-16 01:02:13.715932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.048 [2024-07-16 01:02:13.715945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.048 [2024-07-16 01:02:13.715974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-16 01:02:13.725695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.048 [2024-07-16 01:02:13.725841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.048 [2024-07-16 01:02:13.725867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.048 [2024-07-16 01:02:13.725891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.048 [2024-07-16 01:02:13.725905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.048 [2024-07-16 01:02:13.725948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-16 01:02:13.735756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.048 [2024-07-16 01:02:13.735910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.048 [2024-07-16 01:02:13.735936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.048 [2024-07-16 01:02:13.735950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.048 [2024-07-16 01:02:13.735963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.048 [2024-07-16 01:02:13.735993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.048 qpair failed and we were unable to recover it. 
00:25:39.048 [2024-07-16 01:02:13.745696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.048 [2024-07-16 01:02:13.745852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.048 [2024-07-16 01:02:13.745883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.048 [2024-07-16 01:02:13.745900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.048 [2024-07-16 01:02:13.745913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.048 [2024-07-16 01:02:13.745944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-16 01:02:13.755756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.048 [2024-07-16 01:02:13.755917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.048 [2024-07-16 01:02:13.755943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.048 [2024-07-16 01:02:13.755957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.048 [2024-07-16 01:02:13.755970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.048 [2024-07-16 01:02:13.756000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-16 01:02:13.765775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.048 [2024-07-16 01:02:13.765948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.048 [2024-07-16 01:02:13.765978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.048 [2024-07-16 01:02:13.765993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.048 [2024-07-16 01:02:13.766006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.048 [2024-07-16 01:02:13.766035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.048 qpair failed and we were unable to recover it. 
00:25:39.048 [2024-07-16 01:02:13.775798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.048 [2024-07-16 01:02:13.775969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.048 [2024-07-16 01:02:13.775995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.048 [2024-07-16 01:02:13.776009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.048 [2024-07-16 01:02:13.776022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.048 [2024-07-16 01:02:13.776052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-16 01:02:13.785856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.048 [2024-07-16 01:02:13.786057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.048 [2024-07-16 01:02:13.786082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.048 [2024-07-16 01:02:13.786096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.048 [2024-07-16 01:02:13.786108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.048 [2024-07-16 01:02:13.786138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-16 01:02:13.795872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.048 [2024-07-16 01:02:13.796028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.048 [2024-07-16 01:02:13.796054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.048 [2024-07-16 01:02:13.796068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.048 [2024-07-16 01:02:13.796080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.048 [2024-07-16 01:02:13.796111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.048 qpair failed and we were unable to recover it. 
00:25:39.307 [2024-07-16 01:02:13.805899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.307 [2024-07-16 01:02:13.806059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.307 [2024-07-16 01:02:13.806085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.307 [2024-07-16 01:02:13.806100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.307 [2024-07-16 01:02:13.806112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.307 [2024-07-16 01:02:13.806148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.307 qpair failed and we were unable to recover it. 00:25:39.307 [2024-07-16 01:02:13.815943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.307 [2024-07-16 01:02:13.816098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.307 [2024-07-16 01:02:13.816124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.307 [2024-07-16 01:02:13.816138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.307 [2024-07-16 01:02:13.816150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.307 [2024-07-16 01:02:13.816179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.307 qpair failed and we were unable to recover it. 00:25:39.307 [2024-07-16 01:02:13.825940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.307 [2024-07-16 01:02:13.826136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.307 [2024-07-16 01:02:13.826162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.307 [2024-07-16 01:02:13.826176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.307 [2024-07-16 01:02:13.826189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.307 [2024-07-16 01:02:13.826219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.307 qpair failed and we were unable to recover it. 
00:25:39.308 [2024-07-16 01:02:13.835955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.308 [2024-07-16 01:02:13.836120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.308 [2024-07-16 01:02:13.836146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.308 [2024-07-16 01:02:13.836160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.308 [2024-07-16 01:02:13.836173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.308 [2024-07-16 01:02:13.836203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.308 qpair failed and we were unable to recover it. 00:25:39.308 [2024-07-16 01:02:13.846075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.308 [2024-07-16 01:02:13.846234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.308 [2024-07-16 01:02:13.846259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.308 [2024-07-16 01:02:13.846273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.308 [2024-07-16 01:02:13.846286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.308 [2024-07-16 01:02:13.846316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.308 qpair failed and we were unable to recover it. 00:25:39.308 [2024-07-16 01:02:13.855997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.308 [2024-07-16 01:02:13.856144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.308 [2024-07-16 01:02:13.856175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.308 [2024-07-16 01:02:13.856190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.308 [2024-07-16 01:02:13.856202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.308 [2024-07-16 01:02:13.856231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.308 qpair failed and we were unable to recover it. 
00:25:39.308 [2024-07-16 01:02:13.866085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.308 [2024-07-16 01:02:13.866237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.308 [2024-07-16 01:02:13.866262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.308 [2024-07-16 01:02:13.866275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.308 [2024-07-16 01:02:13.866288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.308 [2024-07-16 01:02:13.866319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.308 qpair failed and we were unable to recover it. 00:25:39.308 [2024-07-16 01:02:13.876189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.308 [2024-07-16 01:02:13.876338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.308 [2024-07-16 01:02:13.876363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.308 [2024-07-16 01:02:13.876378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.308 [2024-07-16 01:02:13.876390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.308 [2024-07-16 01:02:13.876421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.308 qpair failed and we were unable to recover it. 00:25:39.308 [2024-07-16 01:02:13.886116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.308 [2024-07-16 01:02:13.886272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.308 [2024-07-16 01:02:13.886297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.308 [2024-07-16 01:02:13.886311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.308 [2024-07-16 01:02:13.886324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.308 [2024-07-16 01:02:13.886354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.308 qpair failed and we were unable to recover it. 
00:25:39.308 [2024-07-16 01:02:13.896121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.308 [2024-07-16 01:02:13.896286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.308 [2024-07-16 01:02:13.896311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.308 [2024-07-16 01:02:13.896325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.308 [2024-07-16 01:02:13.896343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.308 [2024-07-16 01:02:13.896372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.308 qpair failed and we were unable to recover it. 00:25:39.308 [2024-07-16 01:02:13.906250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.308 [2024-07-16 01:02:13.906408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.308 [2024-07-16 01:02:13.906434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.308 [2024-07-16 01:02:13.906448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.308 [2024-07-16 01:02:13.906460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.308 [2024-07-16 01:02:13.906490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.308 qpair failed and we were unable to recover it. 00:25:39.308 [2024-07-16 01:02:13.916233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.308 [2024-07-16 01:02:13.916380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.308 [2024-07-16 01:02:13.916407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.308 [2024-07-16 01:02:13.916421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.308 [2024-07-16 01:02:13.916433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.308 [2024-07-16 01:02:13.916463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.308 qpair failed and we were unable to recover it. 
00:25:39.308 [2024-07-16 01:02:13.926318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.308 [2024-07-16 01:02:13.926482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.308 [2024-07-16 01:02:13.926508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.308 [2024-07-16 01:02:13.926522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.308 [2024-07-16 01:02:13.926534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.308 [2024-07-16 01:02:13.926564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.308 qpair failed and we were unable to recover it. 00:25:39.308 [2024-07-16 01:02:13.936222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.308 [2024-07-16 01:02:13.936369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.308 [2024-07-16 01:02:13.936394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.308 [2024-07-16 01:02:13.936408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.308 [2024-07-16 01:02:13.936420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.308 [2024-07-16 01:02:13.936448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.308 qpair failed and we were unable to recover it. 00:25:39.308 [2024-07-16 01:02:13.946329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.308 [2024-07-16 01:02:13.946507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.308 [2024-07-16 01:02:13.946532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.308 [2024-07-16 01:02:13.946547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.308 [2024-07-16 01:02:13.946559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.308 [2024-07-16 01:02:13.946589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.308 qpair failed and we were unable to recover it. 
00:25:39.308 [2024-07-16 01:02:13.956336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.308 [2024-07-16 01:02:13.956491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.308 [2024-07-16 01:02:13.956516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.308 [2024-07-16 01:02:13.956531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.308 [2024-07-16 01:02:13.956544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.308 [2024-07-16 01:02:13.956573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.308 qpair failed and we were unable to recover it. 00:25:39.308 [2024-07-16 01:02:13.966443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.308 [2024-07-16 01:02:13.966595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.308 [2024-07-16 01:02:13.966620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.308 [2024-07-16 01:02:13.966634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.308 [2024-07-16 01:02:13.966647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.308 [2024-07-16 01:02:13.966676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.308 qpair failed and we were unable to recover it. 00:25:39.308 [2024-07-16 01:02:13.976391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.308 [2024-07-16 01:02:13.976541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.308 [2024-07-16 01:02:13.976566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.309 [2024-07-16 01:02:13.976580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.309 [2024-07-16 01:02:13.976593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.309 [2024-07-16 01:02:13.976622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.309 qpair failed and we were unable to recover it. 
00:25:39.309 [2024-07-16 01:02:13.986440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.309 [2024-07-16 01:02:13.986643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.309 [2024-07-16 01:02:13.986669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.309 [2024-07-16 01:02:13.986688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.309 [2024-07-16 01:02:13.986701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.309 [2024-07-16 01:02:13.986731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.309 qpair failed and we were unable to recover it. 00:25:39.309 [2024-07-16 01:02:13.996511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.309 [2024-07-16 01:02:13.996663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.309 [2024-07-16 01:02:13.996688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.309 [2024-07-16 01:02:13.996702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.309 [2024-07-16 01:02:13.996715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.309 [2024-07-16 01:02:13.996744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.309 qpair failed and we were unable to recover it. 00:25:39.309 [2024-07-16 01:02:14.006472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.309 [2024-07-16 01:02:14.006624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.309 [2024-07-16 01:02:14.006649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.309 [2024-07-16 01:02:14.006663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.309 [2024-07-16 01:02:14.006675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.309 [2024-07-16 01:02:14.006705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.309 qpair failed and we were unable to recover it. 
00:25:39.309 [2024-07-16 01:02:14.016493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.309 [2024-07-16 01:02:14.016640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.309 [2024-07-16 01:02:14.016665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.309 [2024-07-16 01:02:14.016679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.309 [2024-07-16 01:02:14.016691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.309 [2024-07-16 01:02:14.016720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.309 qpair failed and we were unable to recover it. 00:25:39.309 [2024-07-16 01:02:14.026519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.309 [2024-07-16 01:02:14.026675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.309 [2024-07-16 01:02:14.026700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.309 [2024-07-16 01:02:14.026714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.309 [2024-07-16 01:02:14.026727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.309 [2024-07-16 01:02:14.026757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.309 qpair failed and we were unable to recover it. 00:25:39.309 [2024-07-16 01:02:14.036553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.309 [2024-07-16 01:02:14.036727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.309 [2024-07-16 01:02:14.036753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.309 [2024-07-16 01:02:14.036767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.309 [2024-07-16 01:02:14.036780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.309 [2024-07-16 01:02:14.036809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.309 qpair failed and we were unable to recover it. 
00:25:39.309 [2024-07-16 01:02:14.046594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.309 [2024-07-16 01:02:14.046743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.309 [2024-07-16 01:02:14.046768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.309 [2024-07-16 01:02:14.046782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.309 [2024-07-16 01:02:14.046795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.309 [2024-07-16 01:02:14.046825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.309 qpair failed and we were unable to recover it. 00:25:39.309 [2024-07-16 01:02:14.056684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.309 [2024-07-16 01:02:14.056830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.309 [2024-07-16 01:02:14.056856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.309 [2024-07-16 01:02:14.056870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.309 [2024-07-16 01:02:14.056889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.309 [2024-07-16 01:02:14.056920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.309 qpair failed and we were unable to recover it. 00:25:39.568 [2024-07-16 01:02:14.066662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.568 [2024-07-16 01:02:14.066818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.568 [2024-07-16 01:02:14.066844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.568 [2024-07-16 01:02:14.066858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.568 [2024-07-16 01:02:14.066871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.568 [2024-07-16 01:02:14.066910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.568 qpair failed and we were unable to recover it. 
00:25:39.568 [2024-07-16 01:02:14.076743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.568 [2024-07-16 01:02:14.076900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.568 [2024-07-16 01:02:14.076925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.568 [2024-07-16 01:02:14.076945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.568 [2024-07-16 01:02:14.076959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.568 [2024-07-16 01:02:14.076989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.568 qpair failed and we were unable to recover it. 00:25:39.568 [2024-07-16 01:02:14.086682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.568 [2024-07-16 01:02:14.086841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.568 [2024-07-16 01:02:14.086866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.568 [2024-07-16 01:02:14.086887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.568 [2024-07-16 01:02:14.086902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.568 [2024-07-16 01:02:14.086931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.568 qpair failed and we were unable to recover it. 00:25:39.568 [2024-07-16 01:02:14.096704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.568 [2024-07-16 01:02:14.096868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.568 [2024-07-16 01:02:14.096899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.568 [2024-07-16 01:02:14.096921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.568 [2024-07-16 01:02:14.096935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.568 [2024-07-16 01:02:14.096965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.568 qpair failed and we were unable to recover it. 
00:25:39.568 [2024-07-16 01:02:14.106840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.568 [2024-07-16 01:02:14.107048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.568 [2024-07-16 01:02:14.107075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.568 [2024-07-16 01:02:14.107093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.568 [2024-07-16 01:02:14.107112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.568 [2024-07-16 01:02:14.107145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.568 qpair failed and we were unable to recover it. 00:25:39.568 [2024-07-16 01:02:14.116795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.568 [2024-07-16 01:02:14.116956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.568 [2024-07-16 01:02:14.116982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.568 [2024-07-16 01:02:14.116996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.568 [2024-07-16 01:02:14.117009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.568 [2024-07-16 01:02:14.117039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.568 qpair failed and we were unable to recover it. 00:25:39.568 [2024-07-16 01:02:14.126843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.568 [2024-07-16 01:02:14.127005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.568 [2024-07-16 01:02:14.127032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.568 [2024-07-16 01:02:14.127050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.568 [2024-07-16 01:02:14.127064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.568 [2024-07-16 01:02:14.127095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.568 qpair failed and we were unable to recover it. 
00:25:39.568 [2024-07-16 01:02:14.136912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.568 [2024-07-16 01:02:14.137058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.568 [2024-07-16 01:02:14.137084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.568 [2024-07-16 01:02:14.137098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.568 [2024-07-16 01:02:14.137111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.568 [2024-07-16 01:02:14.137142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.568 qpair failed and we were unable to recover it. 00:25:39.568 [2024-07-16 01:02:14.146971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.568 [2024-07-16 01:02:14.147133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.568 [2024-07-16 01:02:14.147159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.568 [2024-07-16 01:02:14.147173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.568 [2024-07-16 01:02:14.147186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.568 [2024-07-16 01:02:14.147217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.568 qpair failed and we were unable to recover it. 00:25:39.568 [2024-07-16 01:02:14.156901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.568 [2024-07-16 01:02:14.157052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.568 [2024-07-16 01:02:14.157078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.568 [2024-07-16 01:02:14.157098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.568 [2024-07-16 01:02:14.157111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.568 [2024-07-16 01:02:14.157142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.568 qpair failed and we were unable to recover it. 
00:25:39.568 [2024-07-16 01:02:14.166938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.568 [2024-07-16 01:02:14.167098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.568 [2024-07-16 01:02:14.167129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.568 [2024-07-16 01:02:14.167145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.568 [2024-07-16 01:02:14.167157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.568 [2024-07-16 01:02:14.167187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.568 qpair failed and we were unable to recover it. 00:25:39.569 [2024-07-16 01:02:14.176978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.569 [2024-07-16 01:02:14.177185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.569 [2024-07-16 01:02:14.177212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.569 [2024-07-16 01:02:14.177226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.569 [2024-07-16 01:02:14.177239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.569 [2024-07-16 01:02:14.177269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.569 qpair failed and we were unable to recover it. 00:25:39.569 [2024-07-16 01:02:14.186986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.569 [2024-07-16 01:02:14.187170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.569 [2024-07-16 01:02:14.187196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.569 [2024-07-16 01:02:14.187210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.569 [2024-07-16 01:02:14.187223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.569 [2024-07-16 01:02:14.187253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.569 qpair failed and we were unable to recover it. 
00:25:39.569 [2024-07-16 01:02:14.197098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.569 [2024-07-16 01:02:14.197280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.569 [2024-07-16 01:02:14.197311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.569 [2024-07-16 01:02:14.197328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.569 [2024-07-16 01:02:14.197341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.569 [2024-07-16 01:02:14.197371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.569 qpair failed and we were unable to recover it. 00:25:39.569 [2024-07-16 01:02:14.207041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.569 [2024-07-16 01:02:14.207187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.569 [2024-07-16 01:02:14.207213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.569 [2024-07-16 01:02:14.207227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.569 [2024-07-16 01:02:14.207239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.569 [2024-07-16 01:02:14.207276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.569 qpair failed and we were unable to recover it. 00:25:39.569 [2024-07-16 01:02:14.217075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.569 [2024-07-16 01:02:14.217243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.569 [2024-07-16 01:02:14.217268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.569 [2024-07-16 01:02:14.217282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.569 [2024-07-16 01:02:14.217295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.569 [2024-07-16 01:02:14.217326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.569 qpair failed and we were unable to recover it. 
00:25:39.569 [2024-07-16 01:02:14.227120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.569 [2024-07-16 01:02:14.227294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.569 [2024-07-16 01:02:14.227320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.569 [2024-07-16 01:02:14.227334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.569 [2024-07-16 01:02:14.227347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.569 [2024-07-16 01:02:14.227377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.569 qpair failed and we were unable to recover it. 00:25:39.569 [2024-07-16 01:02:14.237173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.569 [2024-07-16 01:02:14.237367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.569 [2024-07-16 01:02:14.237393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.569 [2024-07-16 01:02:14.237408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.569 [2024-07-16 01:02:14.237421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.569 [2024-07-16 01:02:14.237452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.569 qpair failed and we were unable to recover it. 00:25:39.569 [2024-07-16 01:02:14.247137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.569 [2024-07-16 01:02:14.247288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.569 [2024-07-16 01:02:14.247317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.569 [2024-07-16 01:02:14.247333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.569 [2024-07-16 01:02:14.247345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.569 [2024-07-16 01:02:14.247375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.569 qpair failed and we were unable to recover it. 
00:25:39.569 [2024-07-16 01:02:14.257178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.569 [2024-07-16 01:02:14.257330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.569 [2024-07-16 01:02:14.257361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.569 [2024-07-16 01:02:14.257376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.569 [2024-07-16 01:02:14.257389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.569 [2024-07-16 01:02:14.257419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.569 qpair failed and we were unable to recover it. 00:25:39.569 [2024-07-16 01:02:14.267257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.569 [2024-07-16 01:02:14.267438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.569 [2024-07-16 01:02:14.267463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.569 [2024-07-16 01:02:14.267477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.569 [2024-07-16 01:02:14.267490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.569 [2024-07-16 01:02:14.267519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.569 qpair failed and we were unable to recover it. 00:25:39.569 [2024-07-16 01:02:14.277288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.569 [2024-07-16 01:02:14.277477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.569 [2024-07-16 01:02:14.277502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.569 [2024-07-16 01:02:14.277516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.569 [2024-07-16 01:02:14.277528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.569 [2024-07-16 01:02:14.277559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.569 qpair failed and we were unable to recover it. 
00:25:39.569 [2024-07-16 01:02:14.287270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.569 [2024-07-16 01:02:14.287417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.569 [2024-07-16 01:02:14.287442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.569 [2024-07-16 01:02:14.287457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.569 [2024-07-16 01:02:14.287469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.569 [2024-07-16 01:02:14.287499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.569 qpair failed and we were unable to recover it. 00:25:39.569 [2024-07-16 01:02:14.297355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.569 [2024-07-16 01:02:14.297540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.569 [2024-07-16 01:02:14.297566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.569 [2024-07-16 01:02:14.297585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.569 [2024-07-16 01:02:14.297608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.569 [2024-07-16 01:02:14.297640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.569 qpair failed and we were unable to recover it. 00:25:39.569 [2024-07-16 01:02:14.307308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.569 [2024-07-16 01:02:14.307464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.569 [2024-07-16 01:02:14.307490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.569 [2024-07-16 01:02:14.307504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.569 [2024-07-16 01:02:14.307516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.569 [2024-07-16 01:02:14.307547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.569 qpair failed and we were unable to recover it. 
00:25:39.569 [2024-07-16 01:02:14.317362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.569 [2024-07-16 01:02:14.317561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.569 [2024-07-16 01:02:14.317587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.570 [2024-07-16 01:02:14.317601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.570 [2024-07-16 01:02:14.317613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.570 [2024-07-16 01:02:14.317643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.570 qpair failed and we were unable to recover it. 00:25:39.827 [2024-07-16 01:02:14.327380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.827 [2024-07-16 01:02:14.327559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.827 [2024-07-16 01:02:14.327584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.827 [2024-07-16 01:02:14.327599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.827 [2024-07-16 01:02:14.327611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.827 [2024-07-16 01:02:14.327640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.827 qpair failed and we were unable to recover it. 00:25:39.827 [2024-07-16 01:02:14.337526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.827 [2024-07-16 01:02:14.337691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.827 [2024-07-16 01:02:14.337717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.827 [2024-07-16 01:02:14.337732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.827 [2024-07-16 01:02:14.337744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.827 [2024-07-16 01:02:14.337773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.827 qpair failed and we were unable to recover it. 
00:25:39.827 [2024-07-16 01:02:14.347463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.827 [2024-07-16 01:02:14.347621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.827 [2024-07-16 01:02:14.347647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.827 [2024-07-16 01:02:14.347661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.827 [2024-07-16 01:02:14.347674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.827 [2024-07-16 01:02:14.347703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.827 qpair failed and we were unable to recover it. 00:25:39.827 [2024-07-16 01:02:14.357551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.827 [2024-07-16 01:02:14.357729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.827 [2024-07-16 01:02:14.357755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.827 [2024-07-16 01:02:14.357769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.827 [2024-07-16 01:02:14.357781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.827 [2024-07-16 01:02:14.357812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.827 qpair failed and we were unable to recover it. 00:25:39.827 [2024-07-16 01:02:14.367481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.827 [2024-07-16 01:02:14.367631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.827 [2024-07-16 01:02:14.367657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.827 [2024-07-16 01:02:14.367671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.827 [2024-07-16 01:02:14.367684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.827 [2024-07-16 01:02:14.367712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.827 qpair failed and we were unable to recover it. 
00:25:39.827 [2024-07-16 01:02:14.377547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.827 [2024-07-16 01:02:14.377699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.828 [2024-07-16 01:02:14.377724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.828 [2024-07-16 01:02:14.377739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.828 [2024-07-16 01:02:14.377751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.828 [2024-07-16 01:02:14.377781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.828 qpair failed and we were unable to recover it. 00:25:39.828 [2024-07-16 01:02:14.387569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.828 [2024-07-16 01:02:14.387738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.828 [2024-07-16 01:02:14.387763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.828 [2024-07-16 01:02:14.387783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.828 [2024-07-16 01:02:14.387797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.828 [2024-07-16 01:02:14.387827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.828 qpair failed and we were unable to recover it. 00:25:39.828 [2024-07-16 01:02:14.397602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.828 [2024-07-16 01:02:14.397755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.828 [2024-07-16 01:02:14.397780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.828 [2024-07-16 01:02:14.397793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.828 [2024-07-16 01:02:14.397806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.828 [2024-07-16 01:02:14.397836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.828 qpair failed and we were unable to recover it. 
00:25:39.828 [2024-07-16 01:02:14.407616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.828 [2024-07-16 01:02:14.407765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.828 [2024-07-16 01:02:14.407790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.828 [2024-07-16 01:02:14.407804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.828 [2024-07-16 01:02:14.407817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.828 [2024-07-16 01:02:14.407846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.828 qpair failed and we were unable to recover it. 00:25:39.828 [2024-07-16 01:02:14.417698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.828 [2024-07-16 01:02:14.417848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.828 [2024-07-16 01:02:14.417873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.828 [2024-07-16 01:02:14.417895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.828 [2024-07-16 01:02:14.417908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.828 [2024-07-16 01:02:14.417938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.828 qpair failed and we were unable to recover it. 00:25:39.828 [2024-07-16 01:02:14.427724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.828 [2024-07-16 01:02:14.427902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.828 [2024-07-16 01:02:14.427927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.828 [2024-07-16 01:02:14.427942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.828 [2024-07-16 01:02:14.427954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.828 [2024-07-16 01:02:14.427984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.828 qpair failed and we were unable to recover it. 
00:25:39.828 [2024-07-16 01:02:14.437731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.828 [2024-07-16 01:02:14.437895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.828 [2024-07-16 01:02:14.437931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.828 [2024-07-16 01:02:14.437948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.828 [2024-07-16 01:02:14.437961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.828 [2024-07-16 01:02:14.437991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.828 qpair failed and we were unable to recover it. 00:25:39.828 [2024-07-16 01:02:14.447716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.828 [2024-07-16 01:02:14.447887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.828 [2024-07-16 01:02:14.447913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.828 [2024-07-16 01:02:14.447927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.828 [2024-07-16 01:02:14.447940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.828 [2024-07-16 01:02:14.447970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.828 qpair failed and we were unable to recover it. 00:25:39.828 [2024-07-16 01:02:14.457785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.828 [2024-07-16 01:02:14.457974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.828 [2024-07-16 01:02:14.458000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.828 [2024-07-16 01:02:14.458014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.828 [2024-07-16 01:02:14.458026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.828 [2024-07-16 01:02:14.458056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.828 qpair failed and we were unable to recover it. 
00:25:39.828 [2024-07-16 01:02:14.467821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.828 [2024-07-16 01:02:14.467976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.828 [2024-07-16 01:02:14.468001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.828 [2024-07-16 01:02:14.468015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.828 [2024-07-16 01:02:14.468028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.828 [2024-07-16 01:02:14.468056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.828 qpair failed and we were unable to recover it. 00:25:39.828 [2024-07-16 01:02:14.477799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.828 [2024-07-16 01:02:14.477961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.828 [2024-07-16 01:02:14.477987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.828 [2024-07-16 01:02:14.478010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.828 [2024-07-16 01:02:14.478025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.828 [2024-07-16 01:02:14.478055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.828 qpair failed and we were unable to recover it. 00:25:39.828 [2024-07-16 01:02:14.487825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.828 [2024-07-16 01:02:14.487979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.828 [2024-07-16 01:02:14.488005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.828 [2024-07-16 01:02:14.488019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.828 [2024-07-16 01:02:14.488032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.828 [2024-07-16 01:02:14.488062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.828 qpair failed and we were unable to recover it. 
00:25:39.828 [2024-07-16 01:02:14.497871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.828 [2024-07-16 01:02:14.498030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.828 [2024-07-16 01:02:14.498055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.828 [2024-07-16 01:02:14.498070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.828 [2024-07-16 01:02:14.498082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.828 [2024-07-16 01:02:14.498111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.828 qpair failed and we were unable to recover it. 00:25:39.828 [2024-07-16 01:02:14.508013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.828 [2024-07-16 01:02:14.508167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.828 [2024-07-16 01:02:14.508192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.828 [2024-07-16 01:02:14.508207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.828 [2024-07-16 01:02:14.508220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.828 [2024-07-16 01:02:14.508249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.828 qpair failed and we were unable to recover it. 00:25:39.828 [2024-07-16 01:02:14.517971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.828 [2024-07-16 01:02:14.518131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.828 [2024-07-16 01:02:14.518156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.829 [2024-07-16 01:02:14.518170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.829 [2024-07-16 01:02:14.518183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.829 [2024-07-16 01:02:14.518212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.829 qpair failed and we were unable to recover it. 
00:25:39.829 [2024-07-16 01:02:14.528010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.829 [2024-07-16 01:02:14.528162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.829 [2024-07-16 01:02:14.528186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.829 [2024-07-16 01:02:14.528201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.829 [2024-07-16 01:02:14.528214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.829 [2024-07-16 01:02:14.528243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.829 qpair failed and we were unable to recover it. 00:25:39.829 [2024-07-16 01:02:14.537969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.829 [2024-07-16 01:02:14.538120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.829 [2024-07-16 01:02:14.538146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.829 [2024-07-16 01:02:14.538160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.829 [2024-07-16 01:02:14.538173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.829 [2024-07-16 01:02:14.538215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.829 qpair failed and we were unable to recover it. 00:25:39.829 [2024-07-16 01:02:14.547995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.829 [2024-07-16 01:02:14.548198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.829 [2024-07-16 01:02:14.548224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.829 [2024-07-16 01:02:14.548237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.829 [2024-07-16 01:02:14.548251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.829 [2024-07-16 01:02:14.548280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.829 qpair failed and we were unable to recover it. 
00:25:39.829 [2024-07-16 01:02:14.558138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.829 [2024-07-16 01:02:14.558309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.829 [2024-07-16 01:02:14.558334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.829 [2024-07-16 01:02:14.558348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.829 [2024-07-16 01:02:14.558360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.829 [2024-07-16 01:02:14.558390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.829 qpair failed and we were unable to recover it. 00:25:39.829 [2024-07-16 01:02:14.568034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.829 [2024-07-16 01:02:14.568203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.829 [2024-07-16 01:02:14.568234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.829 [2024-07-16 01:02:14.568250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.829 [2024-07-16 01:02:14.568262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.829 [2024-07-16 01:02:14.568292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.829 qpair failed and we were unable to recover it. 00:25:39.829 [2024-07-16 01:02:14.578096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:39.829 [2024-07-16 01:02:14.578239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:39.829 [2024-07-16 01:02:14.578265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:39.829 [2024-07-16 01:02:14.578278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:39.829 [2024-07-16 01:02:14.578291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:39.829 [2024-07-16 01:02:14.578322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.829 qpair failed and we were unable to recover it. 
00:25:40.087 [2024-07-16 01:02:14.588149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.087 [2024-07-16 01:02:14.588307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.087 [2024-07-16 01:02:14.588332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.087 [2024-07-16 01:02:14.588346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.087 [2024-07-16 01:02:14.588359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.087 [2024-07-16 01:02:14.588388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.087 qpair failed and we were unable to recover it. 00:25:40.087 [2024-07-16 01:02:14.598239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.087 [2024-07-16 01:02:14.598403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.087 [2024-07-16 01:02:14.598428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.087 [2024-07-16 01:02:14.598442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.087 [2024-07-16 01:02:14.598455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.087 [2024-07-16 01:02:14.598485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.087 qpair failed and we were unable to recover it. 00:25:40.087 [2024-07-16 01:02:14.608180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.087 [2024-07-16 01:02:14.608336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.087 [2024-07-16 01:02:14.608362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.087 [2024-07-16 01:02:14.608375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.087 [2024-07-16 01:02:14.608388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.087 [2024-07-16 01:02:14.608424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.087 qpair failed and we were unable to recover it. 
00:25:40.087 [2024-07-16 01:02:14.618189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.087 [2024-07-16 01:02:14.618347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.087 [2024-07-16 01:02:14.618372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.087 [2024-07-16 01:02:14.618386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.087 [2024-07-16 01:02:14.618399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.087 [2024-07-16 01:02:14.618430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.087 qpair failed and we were unable to recover it. 00:25:40.087 [2024-07-16 01:02:14.628233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.087 [2024-07-16 01:02:14.628412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.087 [2024-07-16 01:02:14.628437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.087 [2024-07-16 01:02:14.628451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.087 [2024-07-16 01:02:14.628464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.087 [2024-07-16 01:02:14.628494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.087 qpair failed and we were unable to recover it. 00:25:40.087 [2024-07-16 01:02:14.638312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.087 [2024-07-16 01:02:14.638466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.087 [2024-07-16 01:02:14.638491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.087 [2024-07-16 01:02:14.638505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.087 [2024-07-16 01:02:14.638518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.087 [2024-07-16 01:02:14.638547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.087 qpair failed and we were unable to recover it. 
00:25:40.087 [2024-07-16 01:02:14.648338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.087 [2024-07-16 01:02:14.648528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.087 [2024-07-16 01:02:14.648553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.087 [2024-07-16 01:02:14.648568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.087 [2024-07-16 01:02:14.648580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.087 [2024-07-16 01:02:14.648609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.087 qpair failed and we were unable to recover it. 00:25:40.087 [2024-07-16 01:02:14.658346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.087 [2024-07-16 01:02:14.658497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.087 [2024-07-16 01:02:14.658527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.087 [2024-07-16 01:02:14.658542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.087 [2024-07-16 01:02:14.658555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.087 [2024-07-16 01:02:14.658584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.087 qpair failed and we were unable to recover it. 00:25:40.087 [2024-07-16 01:02:14.668380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.087 [2024-07-16 01:02:14.668571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.087 [2024-07-16 01:02:14.668595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.087 [2024-07-16 01:02:14.668610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.087 [2024-07-16 01:02:14.668622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.087 [2024-07-16 01:02:14.668651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.087 qpair failed and we were unable to recover it. 
00:25:40.087 [2024-07-16 01:02:14.678382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.087 [2024-07-16 01:02:14.678532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.087 [2024-07-16 01:02:14.678556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.087 [2024-07-16 01:02:14.678570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.087 [2024-07-16 01:02:14.678583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.087 [2024-07-16 01:02:14.678612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.087 qpair failed and we were unable to recover it. 00:25:40.087 [2024-07-16 01:02:14.688389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.087 [2024-07-16 01:02:14.688536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.087 [2024-07-16 01:02:14.688561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.087 [2024-07-16 01:02:14.688575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.087 [2024-07-16 01:02:14.688587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.087 [2024-07-16 01:02:14.688616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.087 qpair failed and we were unable to recover it. 00:25:40.087 [2024-07-16 01:02:14.698461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.087 [2024-07-16 01:02:14.698609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.087 [2024-07-16 01:02:14.698634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.087 [2024-07-16 01:02:14.698648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.087 [2024-07-16 01:02:14.698665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.087 [2024-07-16 01:02:14.698695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.087 qpair failed and we were unable to recover it. 
00:25:40.087 [2024-07-16 01:02:14.708493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.087 [2024-07-16 01:02:14.708646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.087 [2024-07-16 01:02:14.708671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.087 [2024-07-16 01:02:14.708685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.087 [2024-07-16 01:02:14.708698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.087 [2024-07-16 01:02:14.708728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.087 qpair failed and we were unable to recover it. 00:25:40.088 [2024-07-16 01:02:14.718489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.088 [2024-07-16 01:02:14.718667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.088 [2024-07-16 01:02:14.718692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.088 [2024-07-16 01:02:14.718706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.088 [2024-07-16 01:02:14.718718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.088 [2024-07-16 01:02:14.718749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.088 qpair failed and we were unable to recover it. 00:25:40.088 [2024-07-16 01:02:14.728606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.088 [2024-07-16 01:02:14.728765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.088 [2024-07-16 01:02:14.728790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.088 [2024-07-16 01:02:14.728805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.088 [2024-07-16 01:02:14.728817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.088 [2024-07-16 01:02:14.728848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.088 qpair failed and we were unable to recover it. 
00:25:40.088 [2024-07-16 01:02:14.738586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.088 [2024-07-16 01:02:14.738762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.088 [2024-07-16 01:02:14.738787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.088 [2024-07-16 01:02:14.738801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.088 [2024-07-16 01:02:14.738814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.088 [2024-07-16 01:02:14.738843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.088 qpair failed and we were unable to recover it. 00:25:40.088 [2024-07-16 01:02:14.748597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.088 [2024-07-16 01:02:14.748787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.088 [2024-07-16 01:02:14.748813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.088 [2024-07-16 01:02:14.748827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.088 [2024-07-16 01:02:14.748840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.088 [2024-07-16 01:02:14.748870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.088 qpair failed and we were unable to recover it. 00:25:40.088 [2024-07-16 01:02:14.758618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.088 [2024-07-16 01:02:14.758816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.088 [2024-07-16 01:02:14.758842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.088 [2024-07-16 01:02:14.758856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.088 [2024-07-16 01:02:14.758869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.088 [2024-07-16 01:02:14.758918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.088 qpair failed and we were unable to recover it. 
00:25:40.088 [2024-07-16 01:02:14.768687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.088 [2024-07-16 01:02:14.768856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.088 [2024-07-16 01:02:14.768890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.088 [2024-07-16 01:02:14.768907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.088 [2024-07-16 01:02:14.768920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.088 [2024-07-16 01:02:14.768950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.088 qpair failed and we were unable to recover it. 00:25:40.088 [2024-07-16 01:02:14.778664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.088 [2024-07-16 01:02:14.778818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.088 [2024-07-16 01:02:14.778845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.088 [2024-07-16 01:02:14.778859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.088 [2024-07-16 01:02:14.778875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.088 [2024-07-16 01:02:14.778913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.088 qpair failed and we were unable to recover it. 00:25:40.088 [2024-07-16 01:02:14.788708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.088 [2024-07-16 01:02:14.788866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.088 [2024-07-16 01:02:14.788898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.088 [2024-07-16 01:02:14.788913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.088 [2024-07-16 01:02:14.788930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.088 [2024-07-16 01:02:14.788960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.088 qpair failed and we were unable to recover it. 
00:25:40.088 [2024-07-16 01:02:14.798733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.088 [2024-07-16 01:02:14.798909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.088 [2024-07-16 01:02:14.798935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.088 [2024-07-16 01:02:14.798949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.088 [2024-07-16 01:02:14.798961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.088 [2024-07-16 01:02:14.798991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.088 qpair failed and we were unable to recover it. 00:25:40.088 [2024-07-16 01:02:14.808764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.088 [2024-07-16 01:02:14.808942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.088 [2024-07-16 01:02:14.808968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.088 [2024-07-16 01:02:14.808982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.088 [2024-07-16 01:02:14.808995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.088 [2024-07-16 01:02:14.809026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.088 qpair failed and we were unable to recover it. 00:25:40.088 [2024-07-16 01:02:14.818808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.088 [2024-07-16 01:02:14.818964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.088 [2024-07-16 01:02:14.818990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.088 [2024-07-16 01:02:14.819003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.088 [2024-07-16 01:02:14.819016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.088 [2024-07-16 01:02:14.819046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.088 qpair failed and we were unable to recover it. 
00:25:40.088 [2024-07-16 01:02:14.828850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.088 [2024-07-16 01:02:14.829017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.088 [2024-07-16 01:02:14.829044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.088 [2024-07-16 01:02:14.829059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.088 [2024-07-16 01:02:14.829075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.088 [2024-07-16 01:02:14.829105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.088 qpair failed and we were unable to recover it. 00:25:40.088 [2024-07-16 01:02:14.838835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.088 [2024-07-16 01:02:14.839003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.088 [2024-07-16 01:02:14.839029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.088 [2024-07-16 01:02:14.839044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.088 [2024-07-16 01:02:14.839057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.088 [2024-07-16 01:02:14.839086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.088 qpair failed and we were unable to recover it. 00:25:40.347 [2024-07-16 01:02:14.848862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.347 [2024-07-16 01:02:14.849039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.347 [2024-07-16 01:02:14.849065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.347 [2024-07-16 01:02:14.849080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.347 [2024-07-16 01:02:14.849093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.347 [2024-07-16 01:02:14.849122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.347 qpair failed and we were unable to recover it. 
00:25:40.347 [2024-07-16 01:02:14.858892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.347 [2024-07-16 01:02:14.859034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.347 [2024-07-16 01:02:14.859059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.347 [2024-07-16 01:02:14.859073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.347 [2024-07-16 01:02:14.859086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.347 [2024-07-16 01:02:14.859115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.347 qpair failed and we were unable to recover it. 00:25:40.347 [2024-07-16 01:02:14.868938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.347 [2024-07-16 01:02:14.869110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.347 [2024-07-16 01:02:14.869135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.347 [2024-07-16 01:02:14.869149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.347 [2024-07-16 01:02:14.869161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.347 [2024-07-16 01:02:14.869190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.347 qpair failed and we were unable to recover it. 00:25:40.347 [2024-07-16 01:02:14.878987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.347 [2024-07-16 01:02:14.879138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.347 [2024-07-16 01:02:14.879164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.347 [2024-07-16 01:02:14.879184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.347 [2024-07-16 01:02:14.879197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.347 [2024-07-16 01:02:14.879226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.347 qpair failed and we were unable to recover it. 
00:25:40.347 [2024-07-16 01:02:14.888964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.347 [2024-07-16 01:02:14.889115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.347 [2024-07-16 01:02:14.889141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.347 [2024-07-16 01:02:14.889155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.347 [2024-07-16 01:02:14.889167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.347 [2024-07-16 01:02:14.889197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.347 qpair failed and we were unable to recover it. 00:25:40.347 [2024-07-16 01:02:14.898995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.347 [2024-07-16 01:02:14.899145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.347 [2024-07-16 01:02:14.899170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.347 [2024-07-16 01:02:14.899184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.347 [2024-07-16 01:02:14.899197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.347 [2024-07-16 01:02:14.899226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.347 qpair failed and we were unable to recover it. 00:25:40.347 [2024-07-16 01:02:14.909048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.347 [2024-07-16 01:02:14.909237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.347 [2024-07-16 01:02:14.909262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.347 [2024-07-16 01:02:14.909276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.347 [2024-07-16 01:02:14.909289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.347 [2024-07-16 01:02:14.909317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.347 qpair failed and we were unable to recover it. 
00:25:40.347 [2024-07-16 01:02:14.919139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.347 [2024-07-16 01:02:14.919285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.347 [2024-07-16 01:02:14.919310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.347 [2024-07-16 01:02:14.919324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.347 [2024-07-16 01:02:14.919337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.347 [2024-07-16 01:02:14.919365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.347 qpair failed and we were unable to recover it. 00:25:40.348 [2024-07-16 01:02:14.929132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.348 [2024-07-16 01:02:14.929330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.348 [2024-07-16 01:02:14.929355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.348 [2024-07-16 01:02:14.929369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.348 [2024-07-16 01:02:14.929382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.348 [2024-07-16 01:02:14.929412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.348 qpair failed and we were unable to recover it. 00:25:40.348 [2024-07-16 01:02:14.939215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.348 [2024-07-16 01:02:14.939375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.348 [2024-07-16 01:02:14.939401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.348 [2024-07-16 01:02:14.939415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.348 [2024-07-16 01:02:14.939427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.348 [2024-07-16 01:02:14.939455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.348 qpair failed and we were unable to recover it. 
00:25:40.348 [2024-07-16 01:02:14.949194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.348 [2024-07-16 01:02:14.949361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.348 [2024-07-16 01:02:14.949386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.348 [2024-07-16 01:02:14.949400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.348 [2024-07-16 01:02:14.949413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.348 [2024-07-16 01:02:14.949442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.348 qpair failed and we were unable to recover it. 00:25:40.348 [2024-07-16 01:02:14.959203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.348 [2024-07-16 01:02:14.959355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.348 [2024-07-16 01:02:14.959381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.348 [2024-07-16 01:02:14.959396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.348 [2024-07-16 01:02:14.959408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.348 [2024-07-16 01:02:14.959437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.348 qpair failed and we were unable to recover it. 00:25:40.348 [2024-07-16 01:02:14.969334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.348 [2024-07-16 01:02:14.969480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.348 [2024-07-16 01:02:14.969510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.348 [2024-07-16 01:02:14.969525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.348 [2024-07-16 01:02:14.969538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.348 [2024-07-16 01:02:14.969568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.348 qpair failed and we were unable to recover it. 
00:25:40.348 [2024-07-16 01:02:14.979257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.348 [2024-07-16 01:02:14.979409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.348 [2024-07-16 01:02:14.979435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.348 [2024-07-16 01:02:14.979449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.348 [2024-07-16 01:02:14.979462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.348 [2024-07-16 01:02:14.979491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.348 qpair failed and we were unable to recover it. 00:25:40.348 [2024-07-16 01:02:14.989359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.348 [2024-07-16 01:02:14.989527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.348 [2024-07-16 01:02:14.989551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.348 [2024-07-16 01:02:14.989565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.348 [2024-07-16 01:02:14.989577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.348 [2024-07-16 01:02:14.989606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.348 qpair failed and we were unable to recover it. 00:25:40.348 [2024-07-16 01:02:14.999312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.348 [2024-07-16 01:02:14.999462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.348 [2024-07-16 01:02:14.999488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.348 [2024-07-16 01:02:14.999502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.348 [2024-07-16 01:02:14.999515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.348 [2024-07-16 01:02:14.999557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.348 qpair failed and we were unable to recover it. 
00:25:40.348 [2024-07-16 01:02:15.009301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.348 [2024-07-16 01:02:15.009457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.348 [2024-07-16 01:02:15.009483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.348 [2024-07-16 01:02:15.009498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.348 [2024-07-16 01:02:15.009510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.348 [2024-07-16 01:02:15.009545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.348 qpair failed and we were unable to recover it. 00:25:40.348 [2024-07-16 01:02:15.019353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.348 [2024-07-16 01:02:15.019547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.348 [2024-07-16 01:02:15.019574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.348 [2024-07-16 01:02:15.019594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.348 [2024-07-16 01:02:15.019606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.348 [2024-07-16 01:02:15.019637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.348 qpair failed and we were unable to recover it. 00:25:40.348 [2024-07-16 01:02:15.029390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.348 [2024-07-16 01:02:15.029575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.348 [2024-07-16 01:02:15.029600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.348 [2024-07-16 01:02:15.029615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.348 [2024-07-16 01:02:15.029627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.348 [2024-07-16 01:02:15.029658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.348 qpair failed and we were unable to recover it. 
00:25:40.348 [2024-07-16 01:02:15.039394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.348 [2024-07-16 01:02:15.039548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.348 [2024-07-16 01:02:15.039573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.348 [2024-07-16 01:02:15.039587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.348 [2024-07-16 01:02:15.039600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.348 [2024-07-16 01:02:15.039630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.348 qpair failed and we were unable to recover it. 00:25:40.348 [2024-07-16 01:02:15.049437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.348 [2024-07-16 01:02:15.049587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.348 [2024-07-16 01:02:15.049612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.348 [2024-07-16 01:02:15.049626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.348 [2024-07-16 01:02:15.049639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.348 [2024-07-16 01:02:15.049668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.348 qpair failed and we were unable to recover it. 00:25:40.348 [2024-07-16 01:02:15.059569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.348 [2024-07-16 01:02:15.059735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.348 [2024-07-16 01:02:15.059768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.348 [2024-07-16 01:02:15.059784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.348 [2024-07-16 01:02:15.059796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.348 [2024-07-16 01:02:15.059825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.348 qpair failed and we were unable to recover it. 
00:25:40.348 [2024-07-16 01:02:15.069481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.348 [2024-07-16 01:02:15.069643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.349 [2024-07-16 01:02:15.069668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.349 [2024-07-16 01:02:15.069682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.349 [2024-07-16 01:02:15.069695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.349 [2024-07-16 01:02:15.069729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.349 qpair failed and we were unable to recover it. 00:25:40.349 [2024-07-16 01:02:15.079585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.349 [2024-07-16 01:02:15.079732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.349 [2024-07-16 01:02:15.079757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.349 [2024-07-16 01:02:15.079771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.349 [2024-07-16 01:02:15.079783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.349 [2024-07-16 01:02:15.079813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.349 qpair failed and we were unable to recover it. 00:25:40.349 [2024-07-16 01:02:15.089638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.349 [2024-07-16 01:02:15.089803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.349 [2024-07-16 01:02:15.089829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.349 [2024-07-16 01:02:15.089843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.349 [2024-07-16 01:02:15.089855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.349 [2024-07-16 01:02:15.089892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.349 qpair failed and we were unable to recover it. 
00:25:40.349 [2024-07-16 01:02:15.099611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.349 [2024-07-16 01:02:15.099763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.349 [2024-07-16 01:02:15.099788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.349 [2024-07-16 01:02:15.099802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.349 [2024-07-16 01:02:15.099815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.349 [2024-07-16 01:02:15.099850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.349 qpair failed and we were unable to recover it. 00:25:40.607 [2024-07-16 01:02:15.109603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.607 [2024-07-16 01:02:15.109756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.607 [2024-07-16 01:02:15.109781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.607 [2024-07-16 01:02:15.109795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.607 [2024-07-16 01:02:15.109808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.607 [2024-07-16 01:02:15.109838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.607 qpair failed and we were unable to recover it. 00:25:40.607 [2024-07-16 01:02:15.119668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.607 [2024-07-16 01:02:15.119863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.607 [2024-07-16 01:02:15.119900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.607 [2024-07-16 01:02:15.119918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.607 [2024-07-16 01:02:15.119931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.607 [2024-07-16 01:02:15.119962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.607 qpair failed and we were unable to recover it. 
00:25:40.607 [2024-07-16 01:02:15.129672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.607 [2024-07-16 01:02:15.129823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.607 [2024-07-16 01:02:15.129848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.607 [2024-07-16 01:02:15.129862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.607 [2024-07-16 01:02:15.129883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.607 [2024-07-16 01:02:15.129929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.607 qpair failed and we were unable to recover it. 00:25:40.607 [2024-07-16 01:02:15.139758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.607 [2024-07-16 01:02:15.139907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.607 [2024-07-16 01:02:15.139933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.607 [2024-07-16 01:02:15.139947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.607 [2024-07-16 01:02:15.139960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.607 [2024-07-16 01:02:15.139991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.607 qpair failed and we were unable to recover it. 00:25:40.607 [2024-07-16 01:02:15.149729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.607 [2024-07-16 01:02:15.149893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.607 [2024-07-16 01:02:15.149919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.149934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.149946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.149975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 
00:25:40.608 [2024-07-16 01:02:15.159771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.159992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.160018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.160032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.160045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.160074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 00:25:40.608 [2024-07-16 01:02:15.169842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.170001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.170027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.170041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.170054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.170085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 00:25:40.608 [2024-07-16 01:02:15.179837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.180021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.180047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.180061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.180073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.180103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 
00:25:40.608 [2024-07-16 01:02:15.189872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.190043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.190068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.190083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.190101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.190133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 00:25:40.608 [2024-07-16 01:02:15.200001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.200164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.200190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.200204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.200217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.200247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 00:25:40.608 [2024-07-16 01:02:15.209933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.210112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.210138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.210155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.210169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.210200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 
00:25:40.608 [2024-07-16 01:02:15.219978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.220131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.220160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.220175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.220189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.220218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 00:25:40.608 [2024-07-16 01:02:15.230119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.230275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.230300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.230315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.230327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.230357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 00:25:40.608 [2024-07-16 01:02:15.240017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.240202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.240228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.240243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.240254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.240297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 
00:25:40.608 [2024-07-16 01:02:15.249994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.250140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.250166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.250180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.250192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.250222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 00:25:40.608 [2024-07-16 01:02:15.260037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.260186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.260211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.260225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.260238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.260267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 00:25:40.608 [2024-07-16 01:02:15.270102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.270263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.270290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.270304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.270317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.270347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 
00:25:40.608 [2024-07-16 01:02:15.280099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.280250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.280276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.280297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.280318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.280354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 00:25:40.608 [2024-07-16 01:02:15.290123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.290281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.290307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.290321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.290334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.290363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 00:25:40.608 [2024-07-16 01:02:15.300146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.300342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.300367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.300381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.300394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.300425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 
00:25:40.608 [2024-07-16 01:02:15.310216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.310372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.310397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.310411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.310424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.310455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 00:25:40.608 [2024-07-16 01:02:15.320209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.320360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.320385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.320399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.320412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.320441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 00:25:40.608 [2024-07-16 01:02:15.330225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.330385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.330411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.330426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.330439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.330469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 
00:25:40.608 [2024-07-16 01:02:15.340261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.340417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.340444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.340463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.340476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.340506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 00:25:40.608 [2024-07-16 01:02:15.350351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.350528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.350554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.350569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.350582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.350610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 00:25:40.608 [2024-07-16 01:02:15.360347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.608 [2024-07-16 01:02:15.360508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.608 [2024-07-16 01:02:15.360533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.608 [2024-07-16 01:02:15.360547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.608 [2024-07-16 01:02:15.360560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.608 [2024-07-16 01:02:15.360601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.608 qpair failed and we were unable to recover it. 
00:25:40.869 [2024-07-16 01:02:15.370367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.869 [2024-07-16 01:02:15.370524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.869 [2024-07-16 01:02:15.370555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.869 [2024-07-16 01:02:15.370571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.869 [2024-07-16 01:02:15.370584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.869 [2024-07-16 01:02:15.370613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.869 qpair failed and we were unable to recover it. 00:25:40.869 [2024-07-16 01:02:15.380384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.869 [2024-07-16 01:02:15.380534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.869 [2024-07-16 01:02:15.380559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.869 [2024-07-16 01:02:15.380573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.869 [2024-07-16 01:02:15.380586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.869 [2024-07-16 01:02:15.380616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.869 qpair failed and we were unable to recover it. 00:25:40.869 [2024-07-16 01:02:15.390432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.869 [2024-07-16 01:02:15.390609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.869 [2024-07-16 01:02:15.390634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.869 [2024-07-16 01:02:15.390649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.869 [2024-07-16 01:02:15.390661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.869 [2024-07-16 01:02:15.390690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.869 qpair failed and we were unable to recover it. 
00:25:40.869 [2024-07-16 01:02:15.400549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.869 [2024-07-16 01:02:15.400714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.869 [2024-07-16 01:02:15.400739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.869 [2024-07-16 01:02:15.400753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.869 [2024-07-16 01:02:15.400766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.869 [2024-07-16 01:02:15.400795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.869 qpair failed and we were unable to recover it. 00:25:40.869 [2024-07-16 01:02:15.410449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.869 [2024-07-16 01:02:15.410593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.869 [2024-07-16 01:02:15.410618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.869 [2024-07-16 01:02:15.410632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.869 [2024-07-16 01:02:15.410644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.869 [2024-07-16 01:02:15.410680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.869 qpair failed and we were unable to recover it. 00:25:40.869 [2024-07-16 01:02:15.420475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.869 [2024-07-16 01:02:15.420621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.869 [2024-07-16 01:02:15.420651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.869 [2024-07-16 01:02:15.420665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.869 [2024-07-16 01:02:15.420677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.869 [2024-07-16 01:02:15.420707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.869 qpair failed and we were unable to recover it. 
00:25:40.869 [2024-07-16 01:02:15.430562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.869 [2024-07-16 01:02:15.430741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.869 [2024-07-16 01:02:15.430766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.869 [2024-07-16 01:02:15.430780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.869 [2024-07-16 01:02:15.430793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.869 [2024-07-16 01:02:15.430822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.869 qpair failed and we were unable to recover it. 00:25:40.869 [2024-07-16 01:02:15.440533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.869 [2024-07-16 01:02:15.440688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.869 [2024-07-16 01:02:15.440712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.869 [2024-07-16 01:02:15.440726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.869 [2024-07-16 01:02:15.440738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.869 [2024-07-16 01:02:15.440768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.869 qpair failed and we were unable to recover it. 00:25:40.869 [2024-07-16 01:02:15.450574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.869 [2024-07-16 01:02:15.450727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.869 [2024-07-16 01:02:15.450752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.869 [2024-07-16 01:02:15.450766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.869 [2024-07-16 01:02:15.450779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.869 [2024-07-16 01:02:15.450808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.869 qpair failed and we were unable to recover it. 
00:25:40.869 [2024-07-16 01:02:15.460593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.869 [2024-07-16 01:02:15.460738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.869 [2024-07-16 01:02:15.460769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.869 [2024-07-16 01:02:15.460784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.869 [2024-07-16 01:02:15.460797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.869 [2024-07-16 01:02:15.460838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.869 qpair failed and we were unable to recover it. 00:25:40.869 [2024-07-16 01:02:15.470654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.869 [2024-07-16 01:02:15.470854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.869 [2024-07-16 01:02:15.470887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.870 [2024-07-16 01:02:15.470903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.870 [2024-07-16 01:02:15.470916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.870 [2024-07-16 01:02:15.470946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.870 qpair failed and we were unable to recover it. 00:25:40.870 [2024-07-16 01:02:15.480651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.870 [2024-07-16 01:02:15.480805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.870 [2024-07-16 01:02:15.480830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.870 [2024-07-16 01:02:15.480844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.870 [2024-07-16 01:02:15.480857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.870 [2024-07-16 01:02:15.480893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.870 qpair failed and we were unable to recover it. 
00:25:40.870 [2024-07-16 01:02:15.490714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.870 [2024-07-16 01:02:15.490868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.870 [2024-07-16 01:02:15.490900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.870 [2024-07-16 01:02:15.490915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.870 [2024-07-16 01:02:15.490928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.870 [2024-07-16 01:02:15.490957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.870 qpair failed and we were unable to recover it. 00:25:40.870 [2024-07-16 01:02:15.500732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.870 [2024-07-16 01:02:15.500894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.870 [2024-07-16 01:02:15.500922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.870 [2024-07-16 01:02:15.500940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.870 [2024-07-16 01:02:15.500953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.870 [2024-07-16 01:02:15.500990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.870 qpair failed and we were unable to recover it. 00:25:40.870 [2024-07-16 01:02:15.510814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.870 [2024-07-16 01:02:15.510973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.870 [2024-07-16 01:02:15.510999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.870 [2024-07-16 01:02:15.511013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.870 [2024-07-16 01:02:15.511025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.870 [2024-07-16 01:02:15.511054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.870 qpair failed and we were unable to recover it. 
00:25:40.870 [2024-07-16 01:02:15.520760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.870 [2024-07-16 01:02:15.520961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.870 [2024-07-16 01:02:15.520987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.870 [2024-07-16 01:02:15.521001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.870 [2024-07-16 01:02:15.521014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.870 [2024-07-16 01:02:15.521043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.870 qpair failed and we were unable to recover it. 00:25:40.870 [2024-07-16 01:02:15.530796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.870 [2024-07-16 01:02:15.530954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.870 [2024-07-16 01:02:15.530979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.870 [2024-07-16 01:02:15.530994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.870 [2024-07-16 01:02:15.531008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.870 [2024-07-16 01:02:15.531037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.870 qpair failed and we were unable to recover it. 00:25:40.870 [2024-07-16 01:02:15.540816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.870 [2024-07-16 01:02:15.540966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.870 [2024-07-16 01:02:15.540991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.870 [2024-07-16 01:02:15.541005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.870 [2024-07-16 01:02:15.541017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.870 [2024-07-16 01:02:15.541048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.870 qpair failed and we were unable to recover it. 
00:25:40.870 [2024-07-16 01:02:15.550861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.870 [2024-07-16 01:02:15.551043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.870 [2024-07-16 01:02:15.551073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.870 [2024-07-16 01:02:15.551088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.870 [2024-07-16 01:02:15.551101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.870 [2024-07-16 01:02:15.551130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.870 qpair failed and we were unable to recover it. 00:25:40.870 [2024-07-16 01:02:15.560915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.870 [2024-07-16 01:02:15.561082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.870 [2024-07-16 01:02:15.561107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.870 [2024-07-16 01:02:15.561122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.870 [2024-07-16 01:02:15.561134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.870 [2024-07-16 01:02:15.561164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.870 qpair failed and we were unable to recover it. 00:25:40.870 [2024-07-16 01:02:15.570899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.870 [2024-07-16 01:02:15.571045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.870 [2024-07-16 01:02:15.571070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.870 [2024-07-16 01:02:15.571084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.870 [2024-07-16 01:02:15.571096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.870 [2024-07-16 01:02:15.571127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.870 qpair failed and we were unable to recover it. 
00:25:40.870 [2024-07-16 01:02:15.580947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.870 [2024-07-16 01:02:15.581096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.870 [2024-07-16 01:02:15.581125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.870 [2024-07-16 01:02:15.581140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.870 [2024-07-16 01:02:15.581153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.870 [2024-07-16 01:02:15.581184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.870 qpair failed and we were unable to recover it. 00:25:40.870 [2024-07-16 01:02:15.591038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.870 [2024-07-16 01:02:15.591191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.870 [2024-07-16 01:02:15.591217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.870 [2024-07-16 01:02:15.591232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.870 [2024-07-16 01:02:15.591250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.870 [2024-07-16 01:02:15.591279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.870 qpair failed and we were unable to recover it. 00:25:40.870 [2024-07-16 01:02:15.600999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.870 [2024-07-16 01:02:15.601154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.870 [2024-07-16 01:02:15.601179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.870 [2024-07-16 01:02:15.601193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.870 [2024-07-16 01:02:15.601206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.870 [2024-07-16 01:02:15.601238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.870 qpair failed and we were unable to recover it. 
00:25:40.870 [2024-07-16 01:02:15.611067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.870 [2024-07-16 01:02:15.611216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.870 [2024-07-16 01:02:15.611242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.871 [2024-07-16 01:02:15.611262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.871 [2024-07-16 01:02:15.611276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.871 [2024-07-16 01:02:15.611306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.871 qpair failed and we were unable to recover it. 00:25:40.871 [2024-07-16 01:02:15.621049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:40.871 [2024-07-16 01:02:15.621216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:40.871 [2024-07-16 01:02:15.621242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:40.871 [2024-07-16 01:02:15.621257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:40.871 [2024-07-16 01:02:15.621270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:40.871 [2024-07-16 01:02:15.621300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.871 qpair failed and we were unable to recover it. 00:25:41.131 [2024-07-16 01:02:15.631093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.131 [2024-07-16 01:02:15.631248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.131 [2024-07-16 01:02:15.631273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.131 [2024-07-16 01:02:15.631288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.131 [2024-07-16 01:02:15.631301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.131 [2024-07-16 01:02:15.631330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.131 qpair failed and we were unable to recover it. 
00:25:41.131 [2024-07-16 01:02:15.641144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.131 [2024-07-16 01:02:15.641307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.131 [2024-07-16 01:02:15.641332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.131 [2024-07-16 01:02:15.641346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.131 [2024-07-16 01:02:15.641359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.131 [2024-07-16 01:02:15.641388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.131 qpair failed and we were unable to recover it. 00:25:41.131 [2024-07-16 01:02:15.651224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.131 [2024-07-16 01:02:15.651372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.131 [2024-07-16 01:02:15.651398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.131 [2024-07-16 01:02:15.651412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.131 [2024-07-16 01:02:15.651424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.131 [2024-07-16 01:02:15.651453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.131 qpair failed and we were unable to recover it. 00:25:41.131 [2024-07-16 01:02:15.661157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.131 [2024-07-16 01:02:15.661299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.131 [2024-07-16 01:02:15.661324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.131 [2024-07-16 01:02:15.661339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.131 [2024-07-16 01:02:15.661351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.131 [2024-07-16 01:02:15.661382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.131 qpair failed and we were unable to recover it. 
00:25:41.131 [2024-07-16 01:02:15.671351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.131 [2024-07-16 01:02:15.671575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.131 [2024-07-16 01:02:15.671600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.131 [2024-07-16 01:02:15.671615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.131 [2024-07-16 01:02:15.671627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.131 [2024-07-16 01:02:15.671656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.131 qpair failed and we were unable to recover it. 00:25:41.131 [2024-07-16 01:02:15.681212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.131 [2024-07-16 01:02:15.681361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.131 [2024-07-16 01:02:15.681386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.131 [2024-07-16 01:02:15.681407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.131 [2024-07-16 01:02:15.681420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.131 [2024-07-16 01:02:15.681449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.131 qpair failed and we were unable to recover it. 00:25:41.131 [2024-07-16 01:02:15.691244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.131 [2024-07-16 01:02:15.691389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.131 [2024-07-16 01:02:15.691414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.131 [2024-07-16 01:02:15.691428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.131 [2024-07-16 01:02:15.691440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.131 [2024-07-16 01:02:15.691469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.131 qpair failed and we were unable to recover it. 
00:25:41.131 [2024-07-16 01:02:15.701360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.131 [2024-07-16 01:02:15.701516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.131 [2024-07-16 01:02:15.701541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.131 [2024-07-16 01:02:15.701556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.131 [2024-07-16 01:02:15.701568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.131 [2024-07-16 01:02:15.701598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.131 qpair failed and we were unable to recover it. 00:25:41.131 [2024-07-16 01:02:15.711437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.131 [2024-07-16 01:02:15.711590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.131 [2024-07-16 01:02:15.711615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.131 [2024-07-16 01:02:15.711630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.131 [2024-07-16 01:02:15.711642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.132 [2024-07-16 01:02:15.711671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.132 qpair failed and we were unable to recover it. 00:25:41.132 [2024-07-16 01:02:15.721455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.132 [2024-07-16 01:02:15.721617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.132 [2024-07-16 01:02:15.721642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.132 [2024-07-16 01:02:15.721656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.132 [2024-07-16 01:02:15.721669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.132 [2024-07-16 01:02:15.721699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.132 qpair failed and we were unable to recover it. 
00:25:41.132 [2024-07-16 01:02:15.731488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.132 [2024-07-16 01:02:15.731658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.132 [2024-07-16 01:02:15.731683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.132 [2024-07-16 01:02:15.731696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.132 [2024-07-16 01:02:15.731709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.132 [2024-07-16 01:02:15.731738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.132 qpair failed and we were unable to recover it. 00:25:41.132 [2024-07-16 01:02:15.741422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.132 [2024-07-16 01:02:15.741593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.132 [2024-07-16 01:02:15.741619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.132 [2024-07-16 01:02:15.741633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.132 [2024-07-16 01:02:15.741646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.132 [2024-07-16 01:02:15.741674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.132 qpair failed and we were unable to recover it. 00:25:41.132 [2024-07-16 01:02:15.751442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.132 [2024-07-16 01:02:15.751595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.132 [2024-07-16 01:02:15.751621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.132 [2024-07-16 01:02:15.751635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.132 [2024-07-16 01:02:15.751648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.132 [2024-07-16 01:02:15.751690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.132 qpair failed and we were unable to recover it. 
00:25:41.132 [2024-07-16 01:02:15.761474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.132 [2024-07-16 01:02:15.761676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.132 [2024-07-16 01:02:15.761702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.132 [2024-07-16 01:02:15.761717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.132 [2024-07-16 01:02:15.761729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.132 [2024-07-16 01:02:15.761758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.132 qpair failed and we were unable to recover it. 00:25:41.132 [2024-07-16 01:02:15.771576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.132 [2024-07-16 01:02:15.771729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.132 [2024-07-16 01:02:15.771754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.132 [2024-07-16 01:02:15.771774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.132 [2024-07-16 01:02:15.771787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.132 [2024-07-16 01:02:15.771817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.132 qpair failed and we were unable to recover it. 00:25:41.132 [2024-07-16 01:02:15.781532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.132 [2024-07-16 01:02:15.781682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.132 [2024-07-16 01:02:15.781707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.132 [2024-07-16 01:02:15.781721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.132 [2024-07-16 01:02:15.781734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.132 [2024-07-16 01:02:15.781763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.132 qpair failed and we were unable to recover it. 
00:25:41.132 [2024-07-16 01:02:15.791558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.132 [2024-07-16 01:02:15.791713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.132 [2024-07-16 01:02:15.791739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.132 [2024-07-16 01:02:15.791753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.132 [2024-07-16 01:02:15.791766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.132 [2024-07-16 01:02:15.791796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.132 qpair failed and we were unable to recover it. 00:25:41.132 [2024-07-16 01:02:15.801614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.132 [2024-07-16 01:02:15.801810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.132 [2024-07-16 01:02:15.801836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.132 [2024-07-16 01:02:15.801850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.132 [2024-07-16 01:02:15.801863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.132 [2024-07-16 01:02:15.801898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.132 qpair failed and we were unable to recover it. 00:25:41.132 [2024-07-16 01:02:15.811727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.132 [2024-07-16 01:02:15.811914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.132 [2024-07-16 01:02:15.811939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.132 [2024-07-16 01:02:15.811953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.132 [2024-07-16 01:02:15.811966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.132 [2024-07-16 01:02:15.811995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.132 qpair failed and we were unable to recover it. 
00:25:41.132 [2024-07-16 01:02:15.821634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.132 [2024-07-16 01:02:15.821781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.132 [2024-07-16 01:02:15.821806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.132 [2024-07-16 01:02:15.821819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.132 [2024-07-16 01:02:15.821832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.132 [2024-07-16 01:02:15.821862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.132 qpair failed and we were unable to recover it. 00:25:41.132 [2024-07-16 01:02:15.831802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.132 [2024-07-16 01:02:15.831954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.132 [2024-07-16 01:02:15.831981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.132 [2024-07-16 01:02:15.831998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.132 [2024-07-16 01:02:15.832013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.132 [2024-07-16 01:02:15.832044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.132 qpair failed and we were unable to recover it. 00:25:41.132 [2024-07-16 01:02:15.841727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.132 [2024-07-16 01:02:15.841888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.132 [2024-07-16 01:02:15.841915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.132 [2024-07-16 01:02:15.841929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.132 [2024-07-16 01:02:15.841941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.132 [2024-07-16 01:02:15.841973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.132 qpair failed and we were unable to recover it. 
00:25:41.132 [2024-07-16 01:02:15.851738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.132 [2024-07-16 01:02:15.851897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.132 [2024-07-16 01:02:15.851923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.132 [2024-07-16 01:02:15.851938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.132 [2024-07-16 01:02:15.851952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.132 [2024-07-16 01:02:15.851981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.132 qpair failed and we were unable to recover it. 00:25:41.132 [2024-07-16 01:02:15.861739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.132 [2024-07-16 01:02:15.861899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.132 [2024-07-16 01:02:15.861940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.132 [2024-07-16 01:02:15.861956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.132 [2024-07-16 01:02:15.861969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae18000b90 00:25:41.132 [2024-07-16 01:02:15.862001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:41.132 qpair failed and we were unable to recover it. 00:25:41.132 [2024-07-16 01:02:15.871903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.132 [2024-07-16 01:02:15.872074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.132 [2024-07-16 01:02:15.872107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.132 [2024-07-16 01:02:15.872125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.132 [2024-07-16 01:02:15.872139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.132 [2024-07-16 01:02:15.872169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.132 qpair failed and we were unable to recover it. 
00:25:41.132 [2024-07-16 01:02:15.881918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.132 [2024-07-16 01:02:15.882121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.132 [2024-07-16 01:02:15.882148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.132 [2024-07-16 01:02:15.882163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.132 [2024-07-16 01:02:15.882176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.132 [2024-07-16 01:02:15.882205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.132 qpair failed and we were unable to recover it. 00:25:41.393 [2024-07-16 01:02:15.891840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.393 [2024-07-16 01:02:15.891998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.393 [2024-07-16 01:02:15.892028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.393 [2024-07-16 01:02:15.892043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.393 [2024-07-16 01:02:15.892056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.393 [2024-07-16 01:02:15.892084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.393 qpair failed and we were unable to recover it. 00:25:41.393 [2024-07-16 01:02:15.901871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.393 [2024-07-16 01:02:15.902064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.393 [2024-07-16 01:02:15.902090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.393 [2024-07-16 01:02:15.902105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.393 [2024-07-16 01:02:15.902118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.393 [2024-07-16 01:02:15.902152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.393 qpair failed and we were unable to recover it. 
00:25:41.393 [2024-07-16 01:02:15.911997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.393 [2024-07-16 01:02:15.912153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.393 [2024-07-16 01:02:15.912179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.393 [2024-07-16 01:02:15.912193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.393 [2024-07-16 01:02:15.912206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.393 [2024-07-16 01:02:15.912234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.393 qpair failed and we were unable to recover it. 00:25:41.393 [2024-07-16 01:02:15.921930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.393 [2024-07-16 01:02:15.922087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.393 [2024-07-16 01:02:15.922112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.393 [2024-07-16 01:02:15.922127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.393 [2024-07-16 01:02:15.922139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.393 [2024-07-16 01:02:15.922168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.393 qpair failed and we were unable to recover it. 00:25:41.393 [2024-07-16 01:02:15.931955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.393 [2024-07-16 01:02:15.932109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.393 [2024-07-16 01:02:15.932135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.393 [2024-07-16 01:02:15.932149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.393 [2024-07-16 01:02:15.932162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.393 [2024-07-16 01:02:15.932189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.393 qpair failed and we were unable to recover it. 
00:25:41.393 [2024-07-16 01:02:15.941979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.393 [2024-07-16 01:02:15.942122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.393 [2024-07-16 01:02:15.942147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.393 [2024-07-16 01:02:15.942161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.393 [2024-07-16 01:02:15.942173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.393 [2024-07-16 01:02:15.942200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.393 qpair failed and we were unable to recover it. 00:25:41.393 [2024-07-16 01:02:15.952024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.393 [2024-07-16 01:02:15.952179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.393 [2024-07-16 01:02:15.952212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.393 [2024-07-16 01:02:15.952227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.393 [2024-07-16 01:02:15.952240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.393 [2024-07-16 01:02:15.952268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.393 qpair failed and we were unable to recover it. 00:25:41.393 [2024-07-16 01:02:15.962069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.393 [2024-07-16 01:02:15.962222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.393 [2024-07-16 01:02:15.962247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.393 [2024-07-16 01:02:15.962262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.393 [2024-07-16 01:02:15.962274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.393 [2024-07-16 01:02:15.962302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.393 qpair failed and we were unable to recover it. 
00:25:41.393 [2024-07-16 01:02:15.972102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.393 [2024-07-16 01:02:15.972253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.393 [2024-07-16 01:02:15.972278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.393 [2024-07-16 01:02:15.972292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.393 [2024-07-16 01:02:15.972305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.393 [2024-07-16 01:02:15.972332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.393 qpair failed and we were unable to recover it. 00:25:41.393 [2024-07-16 01:02:15.982193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.393 [2024-07-16 01:02:15.982337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.393 [2024-07-16 01:02:15.982362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.393 [2024-07-16 01:02:15.982376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.393 [2024-07-16 01:02:15.982388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.393 [2024-07-16 01:02:15.982415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.393 qpair failed and we were unable to recover it. 00:25:41.393 [2024-07-16 01:02:15.992175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.393 [2024-07-16 01:02:15.992339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.393 [2024-07-16 01:02:15.992362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.393 [2024-07-16 01:02:15.992376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.393 [2024-07-16 01:02:15.992387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.393 [2024-07-16 01:02:15.992419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.393 qpair failed and we were unable to recover it. 
00:25:41.393 [2024-07-16 01:02:16.002146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.393 [2024-07-16 01:02:16.002299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.393 [2024-07-16 01:02:16.002324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.393 [2024-07-16 01:02:16.002338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.393 [2024-07-16 01:02:16.002351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.393 [2024-07-16 01:02:16.002379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.393 qpair failed and we were unable to recover it. 00:25:41.393 [2024-07-16 01:02:16.012209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.393 [2024-07-16 01:02:16.012357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.393 [2024-07-16 01:02:16.012382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.393 [2024-07-16 01:02:16.012397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.393 [2024-07-16 01:02:16.012410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.393 [2024-07-16 01:02:16.012438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.393 qpair failed and we were unable to recover it. 00:25:41.393 [2024-07-16 01:02:16.022211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.393 [2024-07-16 01:02:16.022403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.393 [2024-07-16 01:02:16.022429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.393 [2024-07-16 01:02:16.022443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.393 [2024-07-16 01:02:16.022455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.393 [2024-07-16 01:02:16.022483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.393 qpair failed and we were unable to recover it. 
00:25:41.393 [2024-07-16 01:02:16.032238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.393 [2024-07-16 01:02:16.032417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.393 [2024-07-16 01:02:16.032441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.393 [2024-07-16 01:02:16.032455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.393 [2024-07-16 01:02:16.032467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.393 [2024-07-16 01:02:16.032494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.393 qpair failed and we were unable to recover it. 00:25:41.393 [2024-07-16 01:02:16.042253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.393 [2024-07-16 01:02:16.042399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.393 [2024-07-16 01:02:16.042429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.393 [2024-07-16 01:02:16.042445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.393 [2024-07-16 01:02:16.042457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.393 [2024-07-16 01:02:16.042485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.393 qpair failed and we were unable to recover it. 00:25:41.393 [2024-07-16 01:02:16.052305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.393 [2024-07-16 01:02:16.052464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.393 [2024-07-16 01:02:16.052490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.393 [2024-07-16 01:02:16.052504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.393 [2024-07-16 01:02:16.052516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.393 [2024-07-16 01:02:16.052544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.393 qpair failed and we were unable to recover it. 
00:25:41.393 [2024-07-16 01:02:16.062323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.393 [2024-07-16 01:02:16.062509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.393 [2024-07-16 01:02:16.062534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.393 [2024-07-16 01:02:16.062548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.393 [2024-07-16 01:02:16.062560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.393 [2024-07-16 01:02:16.062588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.393 qpair failed and we were unable to recover it. 00:25:41.393 [2024-07-16 01:02:16.072382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.393 [2024-07-16 01:02:16.072533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.393 [2024-07-16 01:02:16.072558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.393 [2024-07-16 01:02:16.072572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.394 [2024-07-16 01:02:16.072584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.394 [2024-07-16 01:02:16.072612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.394 qpair failed and we were unable to recover it. 00:25:41.394 [2024-07-16 01:02:16.082422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.394 [2024-07-16 01:02:16.082613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.394 [2024-07-16 01:02:16.082638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.394 [2024-07-16 01:02:16.082653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.394 [2024-07-16 01:02:16.082670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.394 [2024-07-16 01:02:16.082701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.394 qpair failed and we were unable to recover it. 
00:25:41.394 [2024-07-16 01:02:16.092449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.394 [2024-07-16 01:02:16.092620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.394 [2024-07-16 01:02:16.092645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.394 [2024-07-16 01:02:16.092660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.394 [2024-07-16 01:02:16.092672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.394 [2024-07-16 01:02:16.092700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.394 qpair failed and we were unable to recover it. 00:25:41.394 [2024-07-16 01:02:16.102533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.394 [2024-07-16 01:02:16.102686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.394 [2024-07-16 01:02:16.102713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.394 [2024-07-16 01:02:16.102726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.394 [2024-07-16 01:02:16.102739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.394 [2024-07-16 01:02:16.102767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.394 qpair failed and we were unable to recover it. 00:25:41.394 [2024-07-16 01:02:16.112479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.394 [2024-07-16 01:02:16.112634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.394 [2024-07-16 01:02:16.112660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.394 [2024-07-16 01:02:16.112673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.394 [2024-07-16 01:02:16.112686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.394 [2024-07-16 01:02:16.112713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.394 qpair failed and we were unable to recover it. 
00:25:41.394 [2024-07-16 01:02:16.122547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.394 [2024-07-16 01:02:16.122738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.394 [2024-07-16 01:02:16.122763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.394 [2024-07-16 01:02:16.122777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.394 [2024-07-16 01:02:16.122790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.394 [2024-07-16 01:02:16.122817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.394 qpair failed and we were unable to recover it. 00:25:41.394 [2024-07-16 01:02:16.132532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.394 [2024-07-16 01:02:16.132684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.394 [2024-07-16 01:02:16.132709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.394 [2024-07-16 01:02:16.132723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.394 [2024-07-16 01:02:16.132735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.394 [2024-07-16 01:02:16.132764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.394 qpair failed and we were unable to recover it. 00:25:41.394 [2024-07-16 01:02:16.142574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.394 [2024-07-16 01:02:16.142718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.394 [2024-07-16 01:02:16.142744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.394 [2024-07-16 01:02:16.142758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.394 [2024-07-16 01:02:16.142771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.394 [2024-07-16 01:02:16.142798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.394 qpair failed and we were unable to recover it. 
00:25:41.654 [2024-07-16 01:02:16.152598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.654 [2024-07-16 01:02:16.152753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.654 [2024-07-16 01:02:16.152779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.654 [2024-07-16 01:02:16.152796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.654 [2024-07-16 01:02:16.152809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.654 [2024-07-16 01:02:16.152836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.654 qpair failed and we were unable to recover it. 00:25:41.654 [2024-07-16 01:02:16.162612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.654 [2024-07-16 01:02:16.162758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.654 [2024-07-16 01:02:16.162784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.654 [2024-07-16 01:02:16.162798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.654 [2024-07-16 01:02:16.162811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.654 [2024-07-16 01:02:16.162838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.654 qpair failed and we were unable to recover it. 00:25:41.654 [2024-07-16 01:02:16.172637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.654 [2024-07-16 01:02:16.172779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.654 [2024-07-16 01:02:16.172804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.654 [2024-07-16 01:02:16.172818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.654 [2024-07-16 01:02:16.172836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.654 [2024-07-16 01:02:16.172865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.654 qpair failed and we were unable to recover it. 
00:25:41.654 [2024-07-16 01:02:16.182692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.654 [2024-07-16 01:02:16.182842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.654 [2024-07-16 01:02:16.182868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.654 [2024-07-16 01:02:16.182893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.654 [2024-07-16 01:02:16.182908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.654 [2024-07-16 01:02:16.182936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.654 qpair failed and we were unable to recover it. 00:25:41.654 [2024-07-16 01:02:16.192718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.654 [2024-07-16 01:02:16.192873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.654 [2024-07-16 01:02:16.192907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.654 [2024-07-16 01:02:16.192922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.654 [2024-07-16 01:02:16.192934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.654 [2024-07-16 01:02:16.192964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.654 qpair failed and we were unable to recover it. 00:25:41.654 [2024-07-16 01:02:16.202721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.654 [2024-07-16 01:02:16.202869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.654 [2024-07-16 01:02:16.202900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.654 [2024-07-16 01:02:16.202915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.654 [2024-07-16 01:02:16.202927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.654 [2024-07-16 01:02:16.202955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.654 qpair failed and we were unable to recover it. 
00:25:41.654 [2024-07-16 01:02:16.212768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.654 [2024-07-16 01:02:16.212958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.654 [2024-07-16 01:02:16.212984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.654 [2024-07-16 01:02:16.213002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.654 [2024-07-16 01:02:16.213016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.654 [2024-07-16 01:02:16.213044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.654 qpair failed and we were unable to recover it. 00:25:41.654 [2024-07-16 01:02:16.222785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.654 [2024-07-16 01:02:16.222951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.654 [2024-07-16 01:02:16.222977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.654 [2024-07-16 01:02:16.222991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.654 [2024-07-16 01:02:16.223003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.654 [2024-07-16 01:02:16.223031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.654 qpair failed and we were unable to recover it. 00:25:41.654 [2024-07-16 01:02:16.232835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.654 [2024-07-16 01:02:16.233017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.654 [2024-07-16 01:02:16.233043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.654 [2024-07-16 01:02:16.233057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.654 [2024-07-16 01:02:16.233069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.654 [2024-07-16 01:02:16.233097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.654 qpair failed and we were unable to recover it. 
00:25:41.654 [2024-07-16 01:02:16.242836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.654 [2024-07-16 01:02:16.242995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.654 [2024-07-16 01:02:16.243021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.655 [2024-07-16 01:02:16.243035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.655 [2024-07-16 01:02:16.243047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.655 [2024-07-16 01:02:16.243075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.655 qpair failed and we were unable to recover it. 00:25:41.655 [2024-07-16 01:02:16.252857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.655 [2024-07-16 01:02:16.253012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.655 [2024-07-16 01:02:16.253037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.655 [2024-07-16 01:02:16.253051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.655 [2024-07-16 01:02:16.253063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.655 [2024-07-16 01:02:16.253091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.655 qpair failed and we were unable to recover it. 00:25:41.655 [2024-07-16 01:02:16.262882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.655 [2024-07-16 01:02:16.263029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.655 [2024-07-16 01:02:16.263054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.655 [2024-07-16 01:02:16.263068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.655 [2024-07-16 01:02:16.263086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.655 [2024-07-16 01:02:16.263114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.655 qpair failed and we were unable to recover it. 
00:25:41.655 [2024-07-16 01:02:16.272959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.655 [2024-07-16 01:02:16.273144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.655 [2024-07-16 01:02:16.273169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.655 [2024-07-16 01:02:16.273183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.655 [2024-07-16 01:02:16.273196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.655 [2024-07-16 01:02:16.273224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.655 qpair failed and we were unable to recover it. 00:25:41.655 [2024-07-16 01:02:16.282936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.655 [2024-07-16 01:02:16.283086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.655 [2024-07-16 01:02:16.283111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.655 [2024-07-16 01:02:16.283125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.655 [2024-07-16 01:02:16.283138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.655 [2024-07-16 01:02:16.283165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.655 qpair failed and we were unable to recover it. 00:25:41.655 [2024-07-16 01:02:16.292988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.655 [2024-07-16 01:02:16.293141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.655 [2024-07-16 01:02:16.293165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.655 [2024-07-16 01:02:16.293179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.655 [2024-07-16 01:02:16.293191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.655 [2024-07-16 01:02:16.293219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.655 qpair failed and we were unable to recover it. 
00:25:41.655 [2024-07-16 01:02:16.303031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.655 [2024-07-16 01:02:16.303185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.655 [2024-07-16 01:02:16.303210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.655 [2024-07-16 01:02:16.303224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.655 [2024-07-16 01:02:16.303237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.655 [2024-07-16 01:02:16.303264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.655 qpair failed and we were unable to recover it. 00:25:41.655 [2024-07-16 01:02:16.313067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.655 [2024-07-16 01:02:16.313228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.655 [2024-07-16 01:02:16.313253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.655 [2024-07-16 01:02:16.313267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.655 [2024-07-16 01:02:16.313280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.655 [2024-07-16 01:02:16.313307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.655 qpair failed and we were unable to recover it. 00:25:41.655 [2024-07-16 01:02:16.323070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.655 [2024-07-16 01:02:16.323222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.655 [2024-07-16 01:02:16.323247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.655 [2024-07-16 01:02:16.323261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.655 [2024-07-16 01:02:16.323273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.655 [2024-07-16 01:02:16.323301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.655 qpair failed and we were unable to recover it. 
00:25:41.655 [2024-07-16 01:02:16.333100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.655 [2024-07-16 01:02:16.333245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.655 [2024-07-16 01:02:16.333270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.655 [2024-07-16 01:02:16.333284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.655 [2024-07-16 01:02:16.333297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.655 [2024-07-16 01:02:16.333326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.655 qpair failed and we were unable to recover it. 00:25:41.655 [2024-07-16 01:02:16.343126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.655 [2024-07-16 01:02:16.343273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.655 [2024-07-16 01:02:16.343298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.655 [2024-07-16 01:02:16.343312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.655 [2024-07-16 01:02:16.343324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.655 [2024-07-16 01:02:16.343352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.655 qpair failed and we were unable to recover it. 00:25:41.655 [2024-07-16 01:02:16.353220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.655 [2024-07-16 01:02:16.353428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.655 [2024-07-16 01:02:16.353453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.656 [2024-07-16 01:02:16.353473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.656 [2024-07-16 01:02:16.353486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.656 [2024-07-16 01:02:16.353514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.656 qpair failed and we were unable to recover it. 
00:25:41.656 [2024-07-16 01:02:16.363259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.656 [2024-07-16 01:02:16.363452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.656 [2024-07-16 01:02:16.363476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.656 [2024-07-16 01:02:16.363490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.656 [2024-07-16 01:02:16.363502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.656 [2024-07-16 01:02:16.363529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.656 qpair failed and we were unable to recover it. 00:25:41.656 [2024-07-16 01:02:16.373280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.656 [2024-07-16 01:02:16.373478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.656 [2024-07-16 01:02:16.373505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.656 [2024-07-16 01:02:16.373524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.656 [2024-07-16 01:02:16.373537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.656 [2024-07-16 01:02:16.373567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.656 qpair failed and we were unable to recover it. 00:25:41.656 [2024-07-16 01:02:16.383229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.656 [2024-07-16 01:02:16.383373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.656 [2024-07-16 01:02:16.383399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.656 [2024-07-16 01:02:16.383413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.656 [2024-07-16 01:02:16.383426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.656 [2024-07-16 01:02:16.383454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.656 qpair failed and we were unable to recover it. 
00:25:41.656 [2024-07-16 01:02:16.393367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.656 [2024-07-16 01:02:16.393524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.656 [2024-07-16 01:02:16.393550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.656 [2024-07-16 01:02:16.393564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.656 [2024-07-16 01:02:16.393577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.656 [2024-07-16 01:02:16.393605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.656 qpair failed and we were unable to recover it. 00:25:41.656 [2024-07-16 01:02:16.403302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.656 [2024-07-16 01:02:16.403456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.656 [2024-07-16 01:02:16.403481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.656 [2024-07-16 01:02:16.403495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.656 [2024-07-16 01:02:16.403508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.656 [2024-07-16 01:02:16.403535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.656 qpair failed and we were unable to recover it. 00:25:41.916 [2024-07-16 01:02:16.413458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.916 [2024-07-16 01:02:16.413629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.916 [2024-07-16 01:02:16.413654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.916 [2024-07-16 01:02:16.413668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.916 [2024-07-16 01:02:16.413681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.916 [2024-07-16 01:02:16.413709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.916 qpair failed and we were unable to recover it. 
00:25:41.916 [2024-07-16 01:02:16.423438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.917 [2024-07-16 01:02:16.423587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.917 [2024-07-16 01:02:16.423613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.917 [2024-07-16 01:02:16.423627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.917 [2024-07-16 01:02:16.423640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.917 [2024-07-16 01:02:16.423667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.917 qpair failed and we were unable to recover it. 00:25:41.917 [2024-07-16 01:02:16.433441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.917 [2024-07-16 01:02:16.433623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.917 [2024-07-16 01:02:16.433648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.917 [2024-07-16 01:02:16.433662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.917 [2024-07-16 01:02:16.433675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.917 [2024-07-16 01:02:16.433702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.917 qpair failed and we were unable to recover it. 00:25:41.917 [2024-07-16 01:02:16.443425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.917 [2024-07-16 01:02:16.443576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.917 [2024-07-16 01:02:16.443601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.917 [2024-07-16 01:02:16.443622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.917 [2024-07-16 01:02:16.443635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.917 [2024-07-16 01:02:16.443665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.917 qpair failed and we were unable to recover it. 
00:25:41.917 [2024-07-16 01:02:16.453469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.917 [2024-07-16 01:02:16.453615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.917 [2024-07-16 01:02:16.453640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.917 [2024-07-16 01:02:16.453654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.917 [2024-07-16 01:02:16.453667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.917 [2024-07-16 01:02:16.453694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.917 qpair failed and we were unable to recover it. 00:25:41.917 [2024-07-16 01:02:16.463452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.917 [2024-07-16 01:02:16.463625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.917 [2024-07-16 01:02:16.463650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.917 [2024-07-16 01:02:16.463665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.917 [2024-07-16 01:02:16.463678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.917 [2024-07-16 01:02:16.463705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.917 qpair failed and we were unable to recover it. 00:25:41.917 [2024-07-16 01:02:16.473519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.917 [2024-07-16 01:02:16.473685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.917 [2024-07-16 01:02:16.473710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.917 [2024-07-16 01:02:16.473724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.917 [2024-07-16 01:02:16.473736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.917 [2024-07-16 01:02:16.473764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.917 qpair failed and we were unable to recover it. 
00:25:41.917 [2024-07-16 01:02:16.483534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.917 [2024-07-16 01:02:16.483712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.917 [2024-07-16 01:02:16.483737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.917 [2024-07-16 01:02:16.483752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.917 [2024-07-16 01:02:16.483764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.917 [2024-07-16 01:02:16.483792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.917 qpair failed and we were unable to recover it. 00:25:41.917 [2024-07-16 01:02:16.493578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.917 [2024-07-16 01:02:16.493726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.917 [2024-07-16 01:02:16.493752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.917 [2024-07-16 01:02:16.493766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.917 [2024-07-16 01:02:16.493778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.917 [2024-07-16 01:02:16.493806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.917 qpair failed and we were unable to recover it. 00:25:41.917 [2024-07-16 01:02:16.503569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.917 [2024-07-16 01:02:16.503732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.917 [2024-07-16 01:02:16.503758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.917 [2024-07-16 01:02:16.503771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.917 [2024-07-16 01:02:16.503783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.917 [2024-07-16 01:02:16.503812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.917 qpair failed and we were unable to recover it. 
00:25:41.917 [2024-07-16 01:02:16.513642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.917 [2024-07-16 01:02:16.513824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.917 [2024-07-16 01:02:16.513849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.917 [2024-07-16 01:02:16.513863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.917 [2024-07-16 01:02:16.513887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.917 [2024-07-16 01:02:16.513919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.917 qpair failed and we were unable to recover it. 00:25:41.917 [2024-07-16 01:02:16.523647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.917 [2024-07-16 01:02:16.523837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.917 [2024-07-16 01:02:16.523862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.917 [2024-07-16 01:02:16.523894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.917 [2024-07-16 01:02:16.523909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.917 [2024-07-16 01:02:16.523937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.917 qpair failed and we were unable to recover it. 00:25:41.917 [2024-07-16 01:02:16.533710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.917 [2024-07-16 01:02:16.533860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.917 [2024-07-16 01:02:16.533899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.917 [2024-07-16 01:02:16.533920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.917 [2024-07-16 01:02:16.533934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.917 [2024-07-16 01:02:16.533962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.917 qpair failed and we were unable to recover it. 
00:25:41.917 [2024-07-16 01:02:16.543681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.917 [2024-07-16 01:02:16.543831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.917 [2024-07-16 01:02:16.543857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.917 [2024-07-16 01:02:16.543870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.917 [2024-07-16 01:02:16.543891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.917 [2024-07-16 01:02:16.543920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.917 qpair failed and we were unable to recover it. 00:25:41.917 [2024-07-16 01:02:16.553715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.917 [2024-07-16 01:02:16.553873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.917 [2024-07-16 01:02:16.553909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.917 [2024-07-16 01:02:16.553923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.917 [2024-07-16 01:02:16.553936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.917 [2024-07-16 01:02:16.553963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.917 qpair failed and we were unable to recover it. 00:25:41.917 [2024-07-16 01:02:16.563792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.917 [2024-07-16 01:02:16.563951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.917 [2024-07-16 01:02:16.563977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.918 [2024-07-16 01:02:16.563991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.918 [2024-07-16 01:02:16.564003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.918 [2024-07-16 01:02:16.564031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.918 qpair failed and we were unable to recover it. 
00:25:41.918 [2024-07-16 01:02:16.573776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.918 [2024-07-16 01:02:16.573977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.918 [2024-07-16 01:02:16.574002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.918 [2024-07-16 01:02:16.574016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.918 [2024-07-16 01:02:16.574028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.918 [2024-07-16 01:02:16.574056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.918 qpair failed and we were unable to recover it. 00:25:41.918 [2024-07-16 01:02:16.583841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.918 [2024-07-16 01:02:16.584053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.918 [2024-07-16 01:02:16.584079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.918 [2024-07-16 01:02:16.584093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.918 [2024-07-16 01:02:16.584105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.918 [2024-07-16 01:02:16.584133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.918 qpair failed and we were unable to recover it. 00:25:41.918 [2024-07-16 01:02:16.593936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.918 [2024-07-16 01:02:16.594087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.918 [2024-07-16 01:02:16.594112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.918 [2024-07-16 01:02:16.594126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.918 [2024-07-16 01:02:16.594139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.918 [2024-07-16 01:02:16.594166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.918 qpair failed and we were unable to recover it. 
00:25:41.918 [2024-07-16 01:02:16.603869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.918 [2024-07-16 01:02:16.604023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.918 [2024-07-16 01:02:16.604048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.918 [2024-07-16 01:02:16.604062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.918 [2024-07-16 01:02:16.604074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.918 [2024-07-16 01:02:16.604102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.918 qpair failed and we were unable to recover it. 00:25:41.918 [2024-07-16 01:02:16.613867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.918 [2024-07-16 01:02:16.614030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.918 [2024-07-16 01:02:16.614055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.918 [2024-07-16 01:02:16.614070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.918 [2024-07-16 01:02:16.614082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.918 [2024-07-16 01:02:16.614109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.918 qpair failed and we were unable to recover it. 00:25:41.918 [2024-07-16 01:02:16.623950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.918 [2024-07-16 01:02:16.624101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.918 [2024-07-16 01:02:16.624127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.918 [2024-07-16 01:02:16.624151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.918 [2024-07-16 01:02:16.624163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.918 [2024-07-16 01:02:16.624192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.918 qpair failed and we were unable to recover it. 
00:25:41.918 [2024-07-16 01:02:16.634022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.918 [2024-07-16 01:02:16.634171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.918 [2024-07-16 01:02:16.634196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.918 [2024-07-16 01:02:16.634210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.918 [2024-07-16 01:02:16.634223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.918 [2024-07-16 01:02:16.634250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.918 qpair failed and we were unable to recover it. 00:25:41.918 [2024-07-16 01:02:16.644020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.918 [2024-07-16 01:02:16.644175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.918 [2024-07-16 01:02:16.644199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.918 [2024-07-16 01:02:16.644214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.918 [2024-07-16 01:02:16.644226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.918 [2024-07-16 01:02:16.644254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.918 qpair failed and we were unable to recover it. 00:25:41.918 [2024-07-16 01:02:16.653984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.918 [2024-07-16 01:02:16.654131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.918 [2024-07-16 01:02:16.654155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.918 [2024-07-16 01:02:16.654170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.918 [2024-07-16 01:02:16.654182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.918 [2024-07-16 01:02:16.654209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.918 qpair failed and we were unable to recover it. 
00:25:41.918 [2024-07-16 01:02:16.664057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:41.918 [2024-07-16 01:02:16.664212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:41.918 [2024-07-16 01:02:16.664236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:41.918 [2024-07-16 01:02:16.664250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:41.918 [2024-07-16 01:02:16.664263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:41.918 [2024-07-16 01:02:16.664290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:41.918 qpair failed and we were unable to recover it. 00:25:42.178 [2024-07-16 01:02:16.674114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.178 [2024-07-16 01:02:16.674297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.178 [2024-07-16 01:02:16.674322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.178 [2024-07-16 01:02:16.674336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.178 [2024-07-16 01:02:16.674349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.178 [2024-07-16 01:02:16.674376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.178 qpair failed and we were unable to recover it. 00:25:42.178 [2024-07-16 01:02:16.684092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.178 [2024-07-16 01:02:16.684236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.178 [2024-07-16 01:02:16.684261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.178 [2024-07-16 01:02:16.684275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.178 [2024-07-16 01:02:16.684287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.178 [2024-07-16 01:02:16.684315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.178 qpair failed and we were unable to recover it. 
00:25:42.178 [2024-07-16 01:02:16.694099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.178 [2024-07-16 01:02:16.694247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.178 [2024-07-16 01:02:16.694272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.178 [2024-07-16 01:02:16.694286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.178 [2024-07-16 01:02:16.694298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.178 [2024-07-16 01:02:16.694325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.178 qpair failed and we were unable to recover it. 00:25:42.178 [2024-07-16 01:02:16.704209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.179 [2024-07-16 01:02:16.704383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.179 [2024-07-16 01:02:16.704411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.179 [2024-07-16 01:02:16.704425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.179 [2024-07-16 01:02:16.704438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.179 [2024-07-16 01:02:16.704467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.179 qpair failed and we were unable to recover it. 00:25:42.179 [2024-07-16 01:02:16.714173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.179 [2024-07-16 01:02:16.714324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.179 [2024-07-16 01:02:16.714354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.179 [2024-07-16 01:02:16.714369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.179 [2024-07-16 01:02:16.714381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.179 [2024-07-16 01:02:16.714409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.179 qpair failed and we were unable to recover it. 
00:25:42.179 [2024-07-16 01:02:16.724215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.179 [2024-07-16 01:02:16.724371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.179 [2024-07-16 01:02:16.724396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.179 [2024-07-16 01:02:16.724411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.179 [2024-07-16 01:02:16.724423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.179 [2024-07-16 01:02:16.724451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.179 qpair failed and we were unable to recover it. 00:25:42.179 [2024-07-16 01:02:16.734233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.179 [2024-07-16 01:02:16.734382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.179 [2024-07-16 01:02:16.734407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.179 [2024-07-16 01:02:16.734421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.179 [2024-07-16 01:02:16.734434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.179 [2024-07-16 01:02:16.734461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.179 qpair failed and we were unable to recover it. 00:25:42.179 [2024-07-16 01:02:16.744324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.179 [2024-07-16 01:02:16.744471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.179 [2024-07-16 01:02:16.744496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.179 [2024-07-16 01:02:16.744511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.179 [2024-07-16 01:02:16.744523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.179 [2024-07-16 01:02:16.744551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.179 qpair failed and we were unable to recover it. 
00:25:42.179 [2024-07-16 01:02:16.754336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.179 [2024-07-16 01:02:16.754487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.179 [2024-07-16 01:02:16.754512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.179 [2024-07-16 01:02:16.754526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.179 [2024-07-16 01:02:16.754538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.179 [2024-07-16 01:02:16.754571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.179 qpair failed and we were unable to recover it. 00:25:42.179 [2024-07-16 01:02:16.764404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.179 [2024-07-16 01:02:16.764551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.179 [2024-07-16 01:02:16.764577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.179 [2024-07-16 01:02:16.764591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.179 [2024-07-16 01:02:16.764603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.179 [2024-07-16 01:02:16.764630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.179 qpair failed and we were unable to recover it. 00:25:42.179 [2024-07-16 01:02:16.774371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.179 [2024-07-16 01:02:16.774562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.179 [2024-07-16 01:02:16.774587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.179 [2024-07-16 01:02:16.774601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.179 [2024-07-16 01:02:16.774613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.179 [2024-07-16 01:02:16.774641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.179 qpair failed and we were unable to recover it. 
00:25:42.179 [2024-07-16 01:02:16.784405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.179 [2024-07-16 01:02:16.784599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.179 [2024-07-16 01:02:16.784625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.179 [2024-07-16 01:02:16.784639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.179 [2024-07-16 01:02:16.784651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.179 [2024-07-16 01:02:16.784678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.179 qpair failed and we were unable to recover it. 00:25:42.179 [2024-07-16 01:02:16.794422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.179 [2024-07-16 01:02:16.794594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.179 [2024-07-16 01:02:16.794619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.179 [2024-07-16 01:02:16.794633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.179 [2024-07-16 01:02:16.794645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.179 [2024-07-16 01:02:16.794672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.179 qpair failed and we were unable to recover it. 00:25:42.179 [2024-07-16 01:02:16.804440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.179 [2024-07-16 01:02:16.804593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.179 [2024-07-16 01:02:16.804622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.179 [2024-07-16 01:02:16.804637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.179 [2024-07-16 01:02:16.804650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.179 [2024-07-16 01:02:16.804678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.179 qpair failed and we were unable to recover it. 
00:25:42.180 [2024-07-16 01:02:16.814471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.180 [2024-07-16 01:02:16.814621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.180 [2024-07-16 01:02:16.814646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.180 [2024-07-16 01:02:16.814660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.180 [2024-07-16 01:02:16.814673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.180 [2024-07-16 01:02:16.814700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.180 qpair failed and we were unable to recover it. 00:25:42.180 [2024-07-16 01:02:16.824469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.180 [2024-07-16 01:02:16.824617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.180 [2024-07-16 01:02:16.824642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.180 [2024-07-16 01:02:16.824657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.180 [2024-07-16 01:02:16.824669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.180 [2024-07-16 01:02:16.824696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.180 qpair failed and we were unable to recover it. 00:25:42.180 [2024-07-16 01:02:16.834549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.180 [2024-07-16 01:02:16.834703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.180 [2024-07-16 01:02:16.834728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.180 [2024-07-16 01:02:16.834742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.180 [2024-07-16 01:02:16.834754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.180 [2024-07-16 01:02:16.834782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.180 qpair failed and we were unable to recover it. 
00:25:42.180 [2024-07-16 01:02:16.844566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.180 [2024-07-16 01:02:16.844728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.180 [2024-07-16 01:02:16.844755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.180 [2024-07-16 01:02:16.844774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.180 [2024-07-16 01:02:16.844787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.180 [2024-07-16 01:02:16.844821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.180 qpair failed and we were unable to recover it. 00:25:42.180 [2024-07-16 01:02:16.854579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.180 [2024-07-16 01:02:16.854729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.180 [2024-07-16 01:02:16.854754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.180 [2024-07-16 01:02:16.854769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.180 [2024-07-16 01:02:16.854781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.180 [2024-07-16 01:02:16.854809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.180 qpair failed and we were unable to recover it. 00:25:42.180 [2024-07-16 01:02:16.864613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.180 [2024-07-16 01:02:16.864759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.180 [2024-07-16 01:02:16.864787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.180 [2024-07-16 01:02:16.864803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.180 [2024-07-16 01:02:16.864816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.180 [2024-07-16 01:02:16.864846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.180 qpair failed and we were unable to recover it. 
00:25:42.180 [2024-07-16 01:02:16.874631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.180 [2024-07-16 01:02:16.874817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.180 [2024-07-16 01:02:16.874842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.180 [2024-07-16 01:02:16.874856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.180 [2024-07-16 01:02:16.874869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.180 [2024-07-16 01:02:16.874903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.180 qpair failed and we were unable to recover it. 00:25:42.180 [2024-07-16 01:02:16.884688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.180 [2024-07-16 01:02:16.884868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.180 [2024-07-16 01:02:16.884900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.180 [2024-07-16 01:02:16.884915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.180 [2024-07-16 01:02:16.884927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.180 [2024-07-16 01:02:16.884955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.180 qpair failed and we were unable to recover it. 00:25:42.180 [2024-07-16 01:02:16.894663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.180 [2024-07-16 01:02:16.894834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.180 [2024-07-16 01:02:16.894865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.180 [2024-07-16 01:02:16.894890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.180 [2024-07-16 01:02:16.894905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.180 [2024-07-16 01:02:16.894933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.180 qpair failed and we were unable to recover it. 
00:25:42.180 [2024-07-16 01:02:16.904688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.180 [2024-07-16 01:02:16.904831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.180 [2024-07-16 01:02:16.904856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.180 [2024-07-16 01:02:16.904870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.180 [2024-07-16 01:02:16.904890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.180 [2024-07-16 01:02:16.904919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.180 qpair failed and we were unable to recover it. 00:25:42.180 [2024-07-16 01:02:16.914774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.180 [2024-07-16 01:02:16.914957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.180 [2024-07-16 01:02:16.914982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.180 [2024-07-16 01:02:16.914996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.180 [2024-07-16 01:02:16.915008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.180 [2024-07-16 01:02:16.915036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.180 qpair failed and we were unable to recover it. 00:25:42.180 [2024-07-16 01:02:16.924750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.181 [2024-07-16 01:02:16.924953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.181 [2024-07-16 01:02:16.924979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.181 [2024-07-16 01:02:16.924993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.181 [2024-07-16 01:02:16.925007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.181 [2024-07-16 01:02:16.925035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.181 qpair failed and we were unable to recover it. 
00:25:42.181 [2024-07-16 01:02:16.934790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.181 [2024-07-16 01:02:16.934953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.181 [2024-07-16 01:02:16.934978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.181 [2024-07-16 01:02:16.934992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.181 [2024-07-16 01:02:16.935004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.181 [2024-07-16 01:02:16.935038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.181 qpair failed and we were unable to recover it. 00:25:42.441 [2024-07-16 01:02:16.944838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.441 [2024-07-16 01:02:16.945021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.441 [2024-07-16 01:02:16.945046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.441 [2024-07-16 01:02:16.945061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.441 [2024-07-16 01:02:16.945072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.441 [2024-07-16 01:02:16.945099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.441 qpair failed and we were unable to recover it. 00:25:42.441 [2024-07-16 01:02:16.954908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.441 [2024-07-16 01:02:16.955066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.441 [2024-07-16 01:02:16.955091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.441 [2024-07-16 01:02:16.955105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.441 [2024-07-16 01:02:16.955117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.441 [2024-07-16 01:02:16.955145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.441 qpair failed and we were unable to recover it. 
00:25:42.441 [2024-07-16 01:02:16.964872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.441 [2024-07-16 01:02:16.965030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.441 [2024-07-16 01:02:16.965056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.441 [2024-07-16 01:02:16.965070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.441 [2024-07-16 01:02:16.965083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.441 [2024-07-16 01:02:16.965110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.441 qpair failed and we were unable to recover it. 00:25:42.441 [2024-07-16 01:02:16.974934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.441 [2024-07-16 01:02:16.975094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.441 [2024-07-16 01:02:16.975118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.441 [2024-07-16 01:02:16.975132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.441 [2024-07-16 01:02:16.975145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.441 [2024-07-16 01:02:16.975172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.441 qpair failed and we were unable to recover it. 00:25:42.441 [2024-07-16 01:02:16.984947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.441 [2024-07-16 01:02:16.985109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.441 [2024-07-16 01:02:16.985139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.441 [2024-07-16 01:02:16.985154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.441 [2024-07-16 01:02:16.985167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.441 [2024-07-16 01:02:16.985194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.441 qpair failed and we were unable to recover it. 
00:25:42.441 [2024-07-16 01:02:16.995019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.441 [2024-07-16 01:02:16.995172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.441 [2024-07-16 01:02:16.995195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.441 [2024-07-16 01:02:16.995209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.441 [2024-07-16 01:02:16.995220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.441 [2024-07-16 01:02:16.995247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.441 qpair failed and we were unable to recover it. 00:25:42.441 [2024-07-16 01:02:17.005085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.441 [2024-07-16 01:02:17.005251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.441 [2024-07-16 01:02:17.005276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.441 [2024-07-16 01:02:17.005290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.441 [2024-07-16 01:02:17.005302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.441 [2024-07-16 01:02:17.005330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.441 qpair failed and we were unable to recover it. 00:25:42.441 [2024-07-16 01:02:17.015025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.441 [2024-07-16 01:02:17.015240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.441 [2024-07-16 01:02:17.015265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.441 [2024-07-16 01:02:17.015279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.441 [2024-07-16 01:02:17.015292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.441 [2024-07-16 01:02:17.015319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.442 qpair failed and we were unable to recover it. 
00:25:42.442 [2024-07-16 01:02:17.025061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.442 [2024-07-16 01:02:17.025261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.442 [2024-07-16 01:02:17.025287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.442 [2024-07-16 01:02:17.025300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.442 [2024-07-16 01:02:17.025318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.442 [2024-07-16 01:02:17.025347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.442 qpair failed and we were unable to recover it. 00:25:42.442 [2024-07-16 01:02:17.035115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.442 [2024-07-16 01:02:17.035270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.442 [2024-07-16 01:02:17.035295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.442 [2024-07-16 01:02:17.035309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.442 [2024-07-16 01:02:17.035321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.442 [2024-07-16 01:02:17.035349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.442 qpair failed and we were unable to recover it. 00:25:42.442 [2024-07-16 01:02:17.045094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.442 [2024-07-16 01:02:17.045239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.442 [2024-07-16 01:02:17.045264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.442 [2024-07-16 01:02:17.045278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.442 [2024-07-16 01:02:17.045291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.442 [2024-07-16 01:02:17.045318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.442 qpair failed and we were unable to recover it. 
00:25:42.442 [2024-07-16 01:02:17.055242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.442 [2024-07-16 01:02:17.055407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.442 [2024-07-16 01:02:17.055432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.442 [2024-07-16 01:02:17.055446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.442 [2024-07-16 01:02:17.055459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.442 [2024-07-16 01:02:17.055488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.442 qpair failed and we were unable to recover it. 00:25:42.442 [2024-07-16 01:02:17.065211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.442 [2024-07-16 01:02:17.065364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.442 [2024-07-16 01:02:17.065389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.442 [2024-07-16 01:02:17.065403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.442 [2024-07-16 01:02:17.065416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.442 [2024-07-16 01:02:17.065443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.442 qpair failed and we were unable to recover it. 00:25:42.442 [2024-07-16 01:02:17.075192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.442 [2024-07-16 01:02:17.075348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.442 [2024-07-16 01:02:17.075372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.442 [2024-07-16 01:02:17.075386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.442 [2024-07-16 01:02:17.075399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.442 [2024-07-16 01:02:17.075426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.442 qpair failed and we were unable to recover it. 
00:25:42.442 [2024-07-16 01:02:17.085319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.442 [2024-07-16 01:02:17.085474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.442 [2024-07-16 01:02:17.085499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.442 [2024-07-16 01:02:17.085513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.442 [2024-07-16 01:02:17.085526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.442 [2024-07-16 01:02:17.085553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.442 qpair failed and we were unable to recover it. 00:25:42.442 [2024-07-16 01:02:17.095233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.442 [2024-07-16 01:02:17.095405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.442 [2024-07-16 01:02:17.095430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.442 [2024-07-16 01:02:17.095444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.442 [2024-07-16 01:02:17.095457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.442 [2024-07-16 01:02:17.095484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.442 qpair failed and we were unable to recover it. 00:25:42.442 [2024-07-16 01:02:17.105269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.442 [2024-07-16 01:02:17.105416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.442 [2024-07-16 01:02:17.105442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.442 [2024-07-16 01:02:17.105457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.442 [2024-07-16 01:02:17.105469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.442 [2024-07-16 01:02:17.105497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.442 qpair failed and we were unable to recover it. 
00:25:42.442 [2024-07-16 01:02:17.115297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.442 [2024-07-16 01:02:17.115454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.442 [2024-07-16 01:02:17.115479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.442 [2024-07-16 01:02:17.115493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.442 [2024-07-16 01:02:17.115511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.442 [2024-07-16 01:02:17.115539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.442 qpair failed and we were unable to recover it. 00:25:42.442 [2024-07-16 01:02:17.125412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.442 [2024-07-16 01:02:17.125603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.442 [2024-07-16 01:02:17.125628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.443 [2024-07-16 01:02:17.125642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.443 [2024-07-16 01:02:17.125654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.443 [2024-07-16 01:02:17.125682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.443 qpair failed and we were unable to recover it. 00:25:42.443 [2024-07-16 01:02:17.135390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.443 [2024-07-16 01:02:17.135539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.443 [2024-07-16 01:02:17.135563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.443 [2024-07-16 01:02:17.135578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.443 [2024-07-16 01:02:17.135590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.443 [2024-07-16 01:02:17.135618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.443 qpair failed and we were unable to recover it. 
00:25:42.443 [2024-07-16 01:02:17.145420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.443 [2024-07-16 01:02:17.145569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.443 [2024-07-16 01:02:17.145593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.443 [2024-07-16 01:02:17.145607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.443 [2024-07-16 01:02:17.145620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.443 [2024-07-16 01:02:17.145648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.443 qpair failed and we were unable to recover it. 00:25:42.443 [2024-07-16 01:02:17.155512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.443 [2024-07-16 01:02:17.155730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.443 [2024-07-16 01:02:17.155755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.443 [2024-07-16 01:02:17.155769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.443 [2024-07-16 01:02:17.155782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.443 [2024-07-16 01:02:17.155809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.443 qpair failed and we were unable to recover it. 00:25:42.443 [2024-07-16 01:02:17.165537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.443 [2024-07-16 01:02:17.165720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.443 [2024-07-16 01:02:17.165746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.443 [2024-07-16 01:02:17.165760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.443 [2024-07-16 01:02:17.165772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.443 [2024-07-16 01:02:17.165800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.443 qpair failed and we were unable to recover it. 
00:25:42.443 [2024-07-16 01:02:17.175563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.443 [2024-07-16 01:02:17.175708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.443 [2024-07-16 01:02:17.175733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.443 [2024-07-16 01:02:17.175748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.443 [2024-07-16 01:02:17.175760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.443 [2024-07-16 01:02:17.175788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.443 qpair failed and we were unable to recover it. 00:25:42.443 [2024-07-16 01:02:17.185502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.443 [2024-07-16 01:02:17.185646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.443 [2024-07-16 01:02:17.185671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.443 [2024-07-16 01:02:17.185685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.443 [2024-07-16 01:02:17.185698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.443 [2024-07-16 01:02:17.185725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.443 qpair failed and we were unable to recover it. 00:25:42.443 [2024-07-16 01:02:17.195558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.443 [2024-07-16 01:02:17.195725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.443 [2024-07-16 01:02:17.195750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.443 [2024-07-16 01:02:17.195765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.443 [2024-07-16 01:02:17.195778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.443 [2024-07-16 01:02:17.195805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.443 qpair failed and we were unable to recover it. 
00:25:42.705 [2024-07-16 01:02:17.205590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.705 [2024-07-16 01:02:17.205754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.705 [2024-07-16 01:02:17.205779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.705 [2024-07-16 01:02:17.205793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.705 [2024-07-16 01:02:17.205811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.705 [2024-07-16 01:02:17.205840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.705 qpair failed and we were unable to recover it. 00:25:42.705 [2024-07-16 01:02:17.215602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.705 [2024-07-16 01:02:17.215755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.705 [2024-07-16 01:02:17.215780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.705 [2024-07-16 01:02:17.215794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.705 [2024-07-16 01:02:17.215806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.705 [2024-07-16 01:02:17.215834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.705 qpair failed and we were unable to recover it. 00:25:42.705 [2024-07-16 01:02:17.225632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.705 [2024-07-16 01:02:17.225779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.705 [2024-07-16 01:02:17.225804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.705 [2024-07-16 01:02:17.225817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.705 [2024-07-16 01:02:17.225830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.705 [2024-07-16 01:02:17.225857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.705 qpair failed and we were unable to recover it. 
00:25:42.705 [2024-07-16 01:02:17.235664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.705 [2024-07-16 01:02:17.235847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.705 [2024-07-16 01:02:17.235871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.705 [2024-07-16 01:02:17.235893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.705 [2024-07-16 01:02:17.235906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.705 [2024-07-16 01:02:17.235935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.705 qpair failed and we were unable to recover it. 00:25:42.705 [2024-07-16 01:02:17.245679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.705 [2024-07-16 01:02:17.245835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.705 [2024-07-16 01:02:17.245860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.705 [2024-07-16 01:02:17.245874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.705 [2024-07-16 01:02:17.245894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.705 [2024-07-16 01:02:17.245923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.705 qpair failed and we were unable to recover it. 00:25:42.705 [2024-07-16 01:02:17.255704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.705 [2024-07-16 01:02:17.255858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.705 [2024-07-16 01:02:17.255888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.705 [2024-07-16 01:02:17.255904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.705 [2024-07-16 01:02:17.255916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.705 [2024-07-16 01:02:17.255944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.705 qpair failed and we were unable to recover it. 
00:25:42.705 [2024-07-16 01:02:17.265773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.705 [2024-07-16 01:02:17.265953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.705 [2024-07-16 01:02:17.265978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.705 [2024-07-16 01:02:17.265992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.705 [2024-07-16 01:02:17.266004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.705 [2024-07-16 01:02:17.266032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.705 qpair failed and we were unable to recover it. 00:25:42.705 [2024-07-16 01:02:17.275769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.705 [2024-07-16 01:02:17.275966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.705 [2024-07-16 01:02:17.275991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.705 [2024-07-16 01:02:17.276005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.705 [2024-07-16 01:02:17.276017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.705 [2024-07-16 01:02:17.276044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.705 qpair failed and we were unable to recover it. 00:25:42.705 [2024-07-16 01:02:17.285791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.705 [2024-07-16 01:02:17.285944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.705 [2024-07-16 01:02:17.285969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.705 [2024-07-16 01:02:17.285983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.705 [2024-07-16 01:02:17.285996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.705 [2024-07-16 01:02:17.286023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.705 qpair failed and we were unable to recover it. 
00:25:42.705 [2024-07-16 01:02:17.295841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.705 [2024-07-16 01:02:17.295996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.706 [2024-07-16 01:02:17.296021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.706 [2024-07-16 01:02:17.296035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.706 [2024-07-16 01:02:17.296052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.706 [2024-07-16 01:02:17.296081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.706 qpair failed and we were unable to recover it. 00:25:42.706 [2024-07-16 01:02:17.305858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.706 [2024-07-16 01:02:17.306039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.706 [2024-07-16 01:02:17.306067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.706 [2024-07-16 01:02:17.306088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.706 [2024-07-16 01:02:17.306101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.706 [2024-07-16 01:02:17.306130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.706 qpair failed and we were unable to recover it. 00:25:42.706 [2024-07-16 01:02:17.315890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.706 [2024-07-16 01:02:17.316052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.706 [2024-07-16 01:02:17.316077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.706 [2024-07-16 01:02:17.316092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.706 [2024-07-16 01:02:17.316104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.706 [2024-07-16 01:02:17.316132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.706 qpair failed and we were unable to recover it. 
00:25:42.706 [2024-07-16 01:02:17.325916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.706 [2024-07-16 01:02:17.326066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.706 [2024-07-16 01:02:17.326091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.706 [2024-07-16 01:02:17.326104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.706 [2024-07-16 01:02:17.326117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.706 [2024-07-16 01:02:17.326144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.706 qpair failed and we were unable to recover it. 00:25:42.706 [2024-07-16 01:02:17.335932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.706 [2024-07-16 01:02:17.336086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.706 [2024-07-16 01:02:17.336112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.706 [2024-07-16 01:02:17.336126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.706 [2024-07-16 01:02:17.336138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.706 [2024-07-16 01:02:17.336169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.706 qpair failed and we were unable to recover it. 00:25:42.706 [2024-07-16 01:02:17.345985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.706 [2024-07-16 01:02:17.346155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.706 [2024-07-16 01:02:17.346181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.706 [2024-07-16 01:02:17.346195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.706 [2024-07-16 01:02:17.346208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.706 [2024-07-16 01:02:17.346235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.706 qpair failed and we were unable to recover it. 
00:25:42.706 [2024-07-16 01:02:17.356006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.706 [2024-07-16 01:02:17.356189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.706 [2024-07-16 01:02:17.356214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.706 [2024-07-16 01:02:17.356228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.706 [2024-07-16 01:02:17.356241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.706 [2024-07-16 01:02:17.356268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.706 qpair failed and we were unable to recover it. 00:25:42.706 [2024-07-16 01:02:17.366046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.706 [2024-07-16 01:02:17.366196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.706 [2024-07-16 01:02:17.366221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.706 [2024-07-16 01:02:17.366235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.706 [2024-07-16 01:02:17.366247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.706 [2024-07-16 01:02:17.366275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.706 qpair failed and we were unable to recover it. 00:25:42.706 [2024-07-16 01:02:17.376070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.706 [2024-07-16 01:02:17.376213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.706 [2024-07-16 01:02:17.376238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.706 [2024-07-16 01:02:17.376252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.706 [2024-07-16 01:02:17.376265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.706 [2024-07-16 01:02:17.376292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.706 qpair failed and we were unable to recover it. 
00:25:42.706 [2024-07-16 01:02:17.386057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.706 [2024-07-16 01:02:17.386199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.706 [2024-07-16 01:02:17.386224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.706 [2024-07-16 01:02:17.386244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.706 [2024-07-16 01:02:17.386257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.706 [2024-07-16 01:02:17.386284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.706 qpair failed and we were unable to recover it. 00:25:42.706 [2024-07-16 01:02:17.396180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.706 [2024-07-16 01:02:17.396349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.706 [2024-07-16 01:02:17.396374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.706 [2024-07-16 01:02:17.396387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.706 [2024-07-16 01:02:17.396400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.706 [2024-07-16 01:02:17.396427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.706 qpair failed and we were unable to recover it. 00:25:42.706 [2024-07-16 01:02:17.406177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.706 [2024-07-16 01:02:17.406336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.706 [2024-07-16 01:02:17.406361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.706 [2024-07-16 01:02:17.406375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.706 [2024-07-16 01:02:17.406388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.706 [2024-07-16 01:02:17.406415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.706 qpair failed and we were unable to recover it. 
00:25:42.707 [2024-07-16 01:02:17.416262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.707 [2024-07-16 01:02:17.416410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.707 [2024-07-16 01:02:17.416434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.707 [2024-07-16 01:02:17.416448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.707 [2024-07-16 01:02:17.416461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.707 [2024-07-16 01:02:17.416488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.707 qpair failed and we were unable to recover it. 00:25:42.707 [2024-07-16 01:02:17.426207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.707 [2024-07-16 01:02:17.426353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.707 [2024-07-16 01:02:17.426377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.707 [2024-07-16 01:02:17.426391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.707 [2024-07-16 01:02:17.426404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.707 [2024-07-16 01:02:17.426431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.707 qpair failed and we were unable to recover it. 00:25:42.707 [2024-07-16 01:02:17.436275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.707 [2024-07-16 01:02:17.436473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.707 [2024-07-16 01:02:17.436499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.707 [2024-07-16 01:02:17.436513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.707 [2024-07-16 01:02:17.436525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.707 [2024-07-16 01:02:17.436552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.707 qpair failed and we were unable to recover it. 
00:25:42.707 [2024-07-16 01:02:17.446305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.707 [2024-07-16 01:02:17.446472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.707 [2024-07-16 01:02:17.446497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.707 [2024-07-16 01:02:17.446511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.707 [2024-07-16 01:02:17.446523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.707 [2024-07-16 01:02:17.446550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.707 qpair failed and we were unable to recover it. 00:25:42.707 [2024-07-16 01:02:17.456290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.707 [2024-07-16 01:02:17.456456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.707 [2024-07-16 01:02:17.456481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.707 [2024-07-16 01:02:17.456496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.707 [2024-07-16 01:02:17.456508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.707 [2024-07-16 01:02:17.456535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.707 qpair failed and we were unable to recover it. 00:25:42.996 [2024-07-16 01:02:17.466300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.996 [2024-07-16 01:02:17.466443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.996 [2024-07-16 01:02:17.466468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.996 [2024-07-16 01:02:17.466483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.996 [2024-07-16 01:02:17.466495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.996 [2024-07-16 01:02:17.466522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.996 qpair failed and we were unable to recover it. 
00:25:42.996 [2024-07-16 01:02:17.476374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.996 [2024-07-16 01:02:17.476545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.996 [2024-07-16 01:02:17.476571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.996 [2024-07-16 01:02:17.476591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.996 [2024-07-16 01:02:17.476605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.996 [2024-07-16 01:02:17.476633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.996 qpair failed and we were unable to recover it. 00:25:42.996 [2024-07-16 01:02:17.486377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.996 [2024-07-16 01:02:17.486525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.996 [2024-07-16 01:02:17.486549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.996 [2024-07-16 01:02:17.486563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.996 [2024-07-16 01:02:17.486576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.996 [2024-07-16 01:02:17.486604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.996 qpair failed and we were unable to recover it. 00:25:42.996 [2024-07-16 01:02:17.496442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.996 [2024-07-16 01:02:17.496590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.996 [2024-07-16 01:02:17.496614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.996 [2024-07-16 01:02:17.496628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.996 [2024-07-16 01:02:17.496641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.996 [2024-07-16 01:02:17.496668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.996 qpair failed and we were unable to recover it. 
00:25:42.996 [2024-07-16 01:02:17.506475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.997 [2024-07-16 01:02:17.506624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.997 [2024-07-16 01:02:17.506649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.997 [2024-07-16 01:02:17.506664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.997 [2024-07-16 01:02:17.506676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.997 [2024-07-16 01:02:17.506703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.997 qpair failed and we were unable to recover it. 00:25:42.997 [2024-07-16 01:02:17.516469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.997 [2024-07-16 01:02:17.516623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.997 [2024-07-16 01:02:17.516648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.997 [2024-07-16 01:02:17.516663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.997 [2024-07-16 01:02:17.516675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.997 [2024-07-16 01:02:17.516702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.997 qpair failed and we were unable to recover it. 00:25:42.997 [2024-07-16 01:02:17.526542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.997 [2024-07-16 01:02:17.526752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.997 [2024-07-16 01:02:17.526777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.997 [2024-07-16 01:02:17.526792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.997 [2024-07-16 01:02:17.526805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.997 [2024-07-16 01:02:17.526833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.997 qpair failed and we were unable to recover it. 
00:25:42.997 [2024-07-16 01:02:17.536509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.997 [2024-07-16 01:02:17.536661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.997 [2024-07-16 01:02:17.536687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.997 [2024-07-16 01:02:17.536701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.997 [2024-07-16 01:02:17.536713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.997 [2024-07-16 01:02:17.536741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.997 qpair failed and we were unable to recover it. 00:25:42.997 [2024-07-16 01:02:17.546581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.997 [2024-07-16 01:02:17.546739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.997 [2024-07-16 01:02:17.546765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.997 [2024-07-16 01:02:17.546779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.997 [2024-07-16 01:02:17.546792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.997 [2024-07-16 01:02:17.546820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.997 qpair failed and we were unable to recover it. 00:25:42.997 [2024-07-16 01:02:17.556664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.997 [2024-07-16 01:02:17.556815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.997 [2024-07-16 01:02:17.556840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.997 [2024-07-16 01:02:17.556853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.997 [2024-07-16 01:02:17.556867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.997 [2024-07-16 01:02:17.556901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.997 qpair failed and we were unable to recover it. 
00:25:42.997 [2024-07-16 01:02:17.566634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.997 [2024-07-16 01:02:17.566789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.997 [2024-07-16 01:02:17.566813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.997 [2024-07-16 01:02:17.566833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.997 [2024-07-16 01:02:17.566847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.997 [2024-07-16 01:02:17.566875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.997 qpair failed and we were unable to recover it. 00:25:42.997 [2024-07-16 01:02:17.576632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.997 [2024-07-16 01:02:17.576811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.997 [2024-07-16 01:02:17.576835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.997 [2024-07-16 01:02:17.576849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.997 [2024-07-16 01:02:17.576862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.997 [2024-07-16 01:02:17.576895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.997 qpair failed and we were unable to recover it. 00:25:42.997 [2024-07-16 01:02:17.586659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.997 [2024-07-16 01:02:17.586815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.997 [2024-07-16 01:02:17.586840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.997 [2024-07-16 01:02:17.586855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.997 [2024-07-16 01:02:17.586867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.997 [2024-07-16 01:02:17.586902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.997 qpair failed and we were unable to recover it. 
00:25:42.997 [2024-07-16 01:02:17.596723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.997 [2024-07-16 01:02:17.596891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.997 [2024-07-16 01:02:17.596916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.997 [2024-07-16 01:02:17.596930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.997 [2024-07-16 01:02:17.596942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.997 [2024-07-16 01:02:17.596970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.997 qpair failed and we were unable to recover it. 00:25:42.997 [2024-07-16 01:02:17.606718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.997 [2024-07-16 01:02:17.606900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.997 [2024-07-16 01:02:17.606933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.997 [2024-07-16 01:02:17.606948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.997 [2024-07-16 01:02:17.606962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.997 [2024-07-16 01:02:17.606990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.998 qpair failed and we were unable to recover it. 00:25:42.998 [2024-07-16 01:02:17.616755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.998 [2024-07-16 01:02:17.616910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.998 [2024-07-16 01:02:17.616935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.998 [2024-07-16 01:02:17.616950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.998 [2024-07-16 01:02:17.616963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.998 [2024-07-16 01:02:17.616990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.998 qpair failed and we were unable to recover it. 
00:25:42.998 [2024-07-16 01:02:17.626782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.998 [2024-07-16 01:02:17.626934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.998 [2024-07-16 01:02:17.626960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.998 [2024-07-16 01:02:17.626974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.998 [2024-07-16 01:02:17.626988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.998 [2024-07-16 01:02:17.627015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.998 qpair failed and we were unable to recover it. 00:25:42.998 [2024-07-16 01:02:17.636905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.998 [2024-07-16 01:02:17.637057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.998 [2024-07-16 01:02:17.637082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.998 [2024-07-16 01:02:17.637096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.998 [2024-07-16 01:02:17.637108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.998 [2024-07-16 01:02:17.637136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.998 qpair failed and we were unable to recover it. 00:25:42.998 [2024-07-16 01:02:17.646860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.998 [2024-07-16 01:02:17.647032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.998 [2024-07-16 01:02:17.647058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.998 [2024-07-16 01:02:17.647072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.998 [2024-07-16 01:02:17.647085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.998 [2024-07-16 01:02:17.647112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.998 qpair failed and we were unable to recover it. 
00:25:42.998 [2024-07-16 01:02:17.656859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.998 [2024-07-16 01:02:17.657014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.998 [2024-07-16 01:02:17.657044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.998 [2024-07-16 01:02:17.657059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.998 [2024-07-16 01:02:17.657072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.998 [2024-07-16 01:02:17.657100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.998 qpair failed and we were unable to recover it. 00:25:42.998 [2024-07-16 01:02:17.666914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.998 [2024-07-16 01:02:17.667090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.998 [2024-07-16 01:02:17.667115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.998 [2024-07-16 01:02:17.667129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.998 [2024-07-16 01:02:17.667142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.998 [2024-07-16 01:02:17.667169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.998 qpair failed and we were unable to recover it. 00:25:42.998 [2024-07-16 01:02:17.676929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.998 [2024-07-16 01:02:17.677123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.998 [2024-07-16 01:02:17.677148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.998 [2024-07-16 01:02:17.677163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.998 [2024-07-16 01:02:17.677175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.998 [2024-07-16 01:02:17.677203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.998 qpair failed and we were unable to recover it. 
00:25:42.998 [2024-07-16 01:02:17.686974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.998 [2024-07-16 01:02:17.687124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.998 [2024-07-16 01:02:17.687149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.998 [2024-07-16 01:02:17.687163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.998 [2024-07-16 01:02:17.687175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.998 [2024-07-16 01:02:17.687202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.998 qpair failed and we were unable to recover it. 00:25:42.998 [2024-07-16 01:02:17.697032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.998 [2024-07-16 01:02:17.697182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.998 [2024-07-16 01:02:17.697208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.998 [2024-07-16 01:02:17.697222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.998 [2024-07-16 01:02:17.697234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.998 [2024-07-16 01:02:17.697262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.998 qpair failed and we were unable to recover it. 00:25:42.998 [2024-07-16 01:02:17.707103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.998 [2024-07-16 01:02:17.707246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.998 [2024-07-16 01:02:17.707272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.998 [2024-07-16 01:02:17.707286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.998 [2024-07-16 01:02:17.707298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.998 [2024-07-16 01:02:17.707325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.998 qpair failed and we were unable to recover it. 
00:25:42.998 [2024-07-16 01:02:17.717118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.998 [2024-07-16 01:02:17.717288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.998 [2024-07-16 01:02:17.717313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.999 [2024-07-16 01:02:17.717327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.999 [2024-07-16 01:02:17.717339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.999 [2024-07-16 01:02:17.717366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.999 qpair failed and we were unable to recover it. 00:25:42.999 [2024-07-16 01:02:17.727106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.999 [2024-07-16 01:02:17.727257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.999 [2024-07-16 01:02:17.727282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.999 [2024-07-16 01:02:17.727296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.999 [2024-07-16 01:02:17.727310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.999 [2024-07-16 01:02:17.727338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.999 qpair failed and we were unable to recover it. 00:25:42.999 [2024-07-16 01:02:17.737122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.999 [2024-07-16 01:02:17.737266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.999 [2024-07-16 01:02:17.737291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.999 [2024-07-16 01:02:17.737306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.999 [2024-07-16 01:02:17.737318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.999 [2024-07-16 01:02:17.737345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.999 qpair failed and we were unable to recover it. 
00:25:42.999 [2024-07-16 01:02:17.747154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.999 [2024-07-16 01:02:17.747334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.999 [2024-07-16 01:02:17.747368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.999 [2024-07-16 01:02:17.747384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.999 [2024-07-16 01:02:17.747397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:42.999 [2024-07-16 01:02:17.747425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.999 qpair failed and we were unable to recover it. 00:25:43.261 [2024-07-16 01:02:17.757212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.262 [2024-07-16 01:02:17.757365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.262 [2024-07-16 01:02:17.757390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.262 [2024-07-16 01:02:17.757404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.262 [2024-07-16 01:02:17.757417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.262 [2024-07-16 01:02:17.757444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.262 qpair failed and we were unable to recover it. 00:25:43.262 [2024-07-16 01:02:17.767261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.262 [2024-07-16 01:02:17.767413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.262 [2024-07-16 01:02:17.767438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.262 [2024-07-16 01:02:17.767453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.262 [2024-07-16 01:02:17.767465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.262 [2024-07-16 01:02:17.767493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.262 qpair failed and we were unable to recover it. 
00:25:43.262 [2024-07-16 01:02:17.777234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.262 [2024-07-16 01:02:17.777377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.262 [2024-07-16 01:02:17.777402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.262 [2024-07-16 01:02:17.777416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.262 [2024-07-16 01:02:17.777429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.262 [2024-07-16 01:02:17.777457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.262 qpair failed and we were unable to recover it. 00:25:43.262 [2024-07-16 01:02:17.787319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.262 [2024-07-16 01:02:17.787498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.262 [2024-07-16 01:02:17.787523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.262 [2024-07-16 01:02:17.787537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.262 [2024-07-16 01:02:17.787550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.262 [2024-07-16 01:02:17.787584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.262 qpair failed and we were unable to recover it. 00:25:43.262 [2024-07-16 01:02:17.797327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.262 [2024-07-16 01:02:17.797484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.262 [2024-07-16 01:02:17.797509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.262 [2024-07-16 01:02:17.797523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.262 [2024-07-16 01:02:17.797535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.262 [2024-07-16 01:02:17.797563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.262 qpair failed and we were unable to recover it. 
00:25:43.262 [2024-07-16 01:02:17.807355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.262 [2024-07-16 01:02:17.807522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.262 [2024-07-16 01:02:17.807548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.262 [2024-07-16 01:02:17.807562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.262 [2024-07-16 01:02:17.807574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.262 [2024-07-16 01:02:17.807602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.262 qpair failed and we were unable to recover it. 00:25:43.262 [2024-07-16 01:02:17.817370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.262 [2024-07-16 01:02:17.817517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.262 [2024-07-16 01:02:17.817542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.262 [2024-07-16 01:02:17.817556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.262 [2024-07-16 01:02:17.817568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.262 [2024-07-16 01:02:17.817595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.262 qpair failed and we were unable to recover it. 00:25:43.262 [2024-07-16 01:02:17.827365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.262 [2024-07-16 01:02:17.827503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.262 [2024-07-16 01:02:17.827528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.262 [2024-07-16 01:02:17.827542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.262 [2024-07-16 01:02:17.827554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.262 [2024-07-16 01:02:17.827582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.262 qpair failed and we were unable to recover it. 
00:25:43.262 [2024-07-16 01:02:17.837493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.262 [2024-07-16 01:02:17.837667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.262 [2024-07-16 01:02:17.837696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.262 [2024-07-16 01:02:17.837710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.262 [2024-07-16 01:02:17.837723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.262 [2024-07-16 01:02:17.837750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.262 qpair failed and we were unable to recover it. 00:25:43.262 [2024-07-16 01:02:17.847423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.262 [2024-07-16 01:02:17.847569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.262 [2024-07-16 01:02:17.847593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.262 [2024-07-16 01:02:17.847608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.262 [2024-07-16 01:02:17.847621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.262 [2024-07-16 01:02:17.847650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.262 qpair failed and we were unable to recover it. 00:25:43.262 [2024-07-16 01:02:17.857467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.262 [2024-07-16 01:02:17.857615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.262 [2024-07-16 01:02:17.857639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.262 [2024-07-16 01:02:17.857652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.262 [2024-07-16 01:02:17.857665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.262 [2024-07-16 01:02:17.857693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.262 qpair failed and we were unable to recover it. 
00:25:43.262 [2024-07-16 01:02:17.867477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.262 [2024-07-16 01:02:17.867626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.262 [2024-07-16 01:02:17.867651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.262 [2024-07-16 01:02:17.867665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.262 [2024-07-16 01:02:17.867677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.262 [2024-07-16 01:02:17.867705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.262 qpair failed and we were unable to recover it. 00:25:43.262 [2024-07-16 01:02:17.877537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.262 [2024-07-16 01:02:17.877693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.262 [2024-07-16 01:02:17.877718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.262 [2024-07-16 01:02:17.877733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.262 [2024-07-16 01:02:17.877746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.262 [2024-07-16 01:02:17.877781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.262 qpair failed and we were unable to recover it. 00:25:43.262 [2024-07-16 01:02:17.887548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.262 [2024-07-16 01:02:17.887694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.262 [2024-07-16 01:02:17.887719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.262 [2024-07-16 01:02:17.887734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.262 [2024-07-16 01:02:17.887746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.262 [2024-07-16 01:02:17.887774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.262 qpair failed and we were unable to recover it. 
00:25:43.262 [2024-07-16 01:02:17.897568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.262 [2024-07-16 01:02:17.897715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.262 [2024-07-16 01:02:17.897740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.262 [2024-07-16 01:02:17.897754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.263 [2024-07-16 01:02:17.897766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.263 [2024-07-16 01:02:17.897796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.263 qpair failed and we were unable to recover it. 00:25:43.263 [2024-07-16 01:02:17.907617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.263 [2024-07-16 01:02:17.907767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.263 [2024-07-16 01:02:17.907792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.263 [2024-07-16 01:02:17.907806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.263 [2024-07-16 01:02:17.907818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.263 [2024-07-16 01:02:17.907846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.263 qpair failed and we were unable to recover it. 00:25:43.263 [2024-07-16 01:02:17.917749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.263 [2024-07-16 01:02:17.917915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.263 [2024-07-16 01:02:17.917940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.263 [2024-07-16 01:02:17.917954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.263 [2024-07-16 01:02:17.917966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.263 [2024-07-16 01:02:17.917994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.263 qpair failed and we were unable to recover it. 
00:25:43.263 [2024-07-16 01:02:17.927647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.263 [2024-07-16 01:02:17.927813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.263 [2024-07-16 01:02:17.927843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.263 [2024-07-16 01:02:17.927858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.263 [2024-07-16 01:02:17.927871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.263 [2024-07-16 01:02:17.927906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.263 qpair failed and we were unable to recover it. 00:25:43.263 [2024-07-16 01:02:17.937678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.263 [2024-07-16 01:02:17.937833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.263 [2024-07-16 01:02:17.937858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.263 [2024-07-16 01:02:17.937872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.263 [2024-07-16 01:02:17.937893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.263 [2024-07-16 01:02:17.937922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.263 qpair failed and we were unable to recover it. 00:25:43.263 [2024-07-16 01:02:17.947718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.263 [2024-07-16 01:02:17.947871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.263 [2024-07-16 01:02:17.947902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.263 [2024-07-16 01:02:17.947917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.263 [2024-07-16 01:02:17.947928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.263 [2024-07-16 01:02:17.947956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.263 qpair failed and we were unable to recover it. 
00:25:43.263 [2024-07-16 01:02:17.957776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.263 [2024-07-16 01:02:17.957933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.263 [2024-07-16 01:02:17.957958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.263 [2024-07-16 01:02:17.957971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.263 [2024-07-16 01:02:17.957984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.263 [2024-07-16 01:02:17.958013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.263 qpair failed and we were unable to recover it. 00:25:43.263 [2024-07-16 01:02:17.967801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.263 [2024-07-16 01:02:17.967958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.263 [2024-07-16 01:02:17.967984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.263 [2024-07-16 01:02:17.967998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.263 [2024-07-16 01:02:17.968011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.263 [2024-07-16 01:02:17.968044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.263 qpair failed and we were unable to recover it. 00:25:43.263 [2024-07-16 01:02:17.977783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.263 [2024-07-16 01:02:17.977932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.263 [2024-07-16 01:02:17.977957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.263 [2024-07-16 01:02:17.977971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.263 [2024-07-16 01:02:17.977983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.263 [2024-07-16 01:02:17.978011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.263 qpair failed and we were unable to recover it. 
00:25:43.263 [2024-07-16 01:02:17.987824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.263 [2024-07-16 01:02:17.987974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.263 [2024-07-16 01:02:17.987999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.263 [2024-07-16 01:02:17.988013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.263 [2024-07-16 01:02:17.988025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.263 [2024-07-16 01:02:17.988054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.263 qpair failed and we were unable to recover it. 00:25:43.263 [2024-07-16 01:02:17.997950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.263 [2024-07-16 01:02:17.998144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.263 [2024-07-16 01:02:17.998167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.263 [2024-07-16 01:02:17.998180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.263 [2024-07-16 01:02:17.998191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.263 [2024-07-16 01:02:17.998220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.263 qpair failed and we were unable to recover it. 00:25:43.263 [2024-07-16 01:02:18.007940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.263 [2024-07-16 01:02:18.008098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.263 [2024-07-16 01:02:18.008123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.263 [2024-07-16 01:02:18.008137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.263 [2024-07-16 01:02:18.008149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.263 [2024-07-16 01:02:18.008177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.263 qpair failed and we were unable to recover it. 
00:25:43.524 [2024-07-16 01:02:18.017913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.524 [2024-07-16 01:02:18.018119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.524 [2024-07-16 01:02:18.018149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.524 [2024-07-16 01:02:18.018164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.524 [2024-07-16 01:02:18.018177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.524 [2024-07-16 01:02:18.018205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.524 qpair failed and we were unable to recover it. 00:25:43.524 [2024-07-16 01:02:18.027946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.524 [2024-07-16 01:02:18.028098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.524 [2024-07-16 01:02:18.028123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.524 [2024-07-16 01:02:18.028137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.524 [2024-07-16 01:02:18.028149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.524 [2024-07-16 01:02:18.028177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.524 qpair failed and we were unable to recover it. 00:25:43.524 [2024-07-16 01:02:18.037973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.524 [2024-07-16 01:02:18.038124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.524 [2024-07-16 01:02:18.038149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.524 [2024-07-16 01:02:18.038163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.524 [2024-07-16 01:02:18.038175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.524 [2024-07-16 01:02:18.038202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.524 qpair failed and we were unable to recover it. 
00:25:43.524 [2024-07-16 01:02:18.048009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.524 [2024-07-16 01:02:18.048155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.524 [2024-07-16 01:02:18.048180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.525 [2024-07-16 01:02:18.048194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.525 [2024-07-16 01:02:18.048207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.525 [2024-07-16 01:02:18.048234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.525 qpair failed and we were unable to recover it. 00:25:43.525 [2024-07-16 01:02:18.058064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.525 [2024-07-16 01:02:18.058258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.525 [2024-07-16 01:02:18.058283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.525 [2024-07-16 01:02:18.058297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.525 [2024-07-16 01:02:18.058314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.525 [2024-07-16 01:02:18.058342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.525 qpair failed and we were unable to recover it. 00:25:43.525 [2024-07-16 01:02:18.068061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.525 [2024-07-16 01:02:18.068213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.525 [2024-07-16 01:02:18.068238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.525 [2024-07-16 01:02:18.068252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.525 [2024-07-16 01:02:18.068264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.525 [2024-07-16 01:02:18.068292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.525 qpair failed and we were unable to recover it. 
00:25:43.525 [2024-07-16 01:02:18.078116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.525 [2024-07-16 01:02:18.078270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.525 [2024-07-16 01:02:18.078296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.525 [2024-07-16 01:02:18.078310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.525 [2024-07-16 01:02:18.078323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.525 [2024-07-16 01:02:18.078350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.525 qpair failed and we were unable to recover it. 00:25:43.525 [2024-07-16 01:02:18.088114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.525 [2024-07-16 01:02:18.088265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.525 [2024-07-16 01:02:18.088291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.525 [2024-07-16 01:02:18.088304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.525 [2024-07-16 01:02:18.088317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.525 [2024-07-16 01:02:18.088344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.525 qpair failed and we were unable to recover it. 00:25:43.525 [2024-07-16 01:02:18.098168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.525 [2024-07-16 01:02:18.098318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.525 [2024-07-16 01:02:18.098343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.525 [2024-07-16 01:02:18.098357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.525 [2024-07-16 01:02:18.098369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.525 [2024-07-16 01:02:18.098397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.525 qpair failed and we were unable to recover it. 
00:25:43.525 [2024-07-16 01:02:18.108218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.525 [2024-07-16 01:02:18.108435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.525 [2024-07-16 01:02:18.108460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.525 [2024-07-16 01:02:18.108475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.525 [2024-07-16 01:02:18.108487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.525 [2024-07-16 01:02:18.108514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.525 qpair failed and we were unable to recover it. 00:25:43.525 [2024-07-16 01:02:18.118225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.525 [2024-07-16 01:02:18.118381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.525 [2024-07-16 01:02:18.118406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.525 [2024-07-16 01:02:18.118420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.525 [2024-07-16 01:02:18.118432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.525 [2024-07-16 01:02:18.118460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.525 qpair failed and we were unable to recover it. 00:25:43.525 [2024-07-16 01:02:18.128226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.525 [2024-07-16 01:02:18.128378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.525 [2024-07-16 01:02:18.128403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.525 [2024-07-16 01:02:18.128417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.525 [2024-07-16 01:02:18.128429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.525 [2024-07-16 01:02:18.128457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.525 qpair failed and we were unable to recover it. 
00:25:43.525 [2024-07-16 01:02:18.138323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.525 [2024-07-16 01:02:18.138531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.525 [2024-07-16 01:02:18.138556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.525 [2024-07-16 01:02:18.138570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.525 [2024-07-16 01:02:18.138582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.525 [2024-07-16 01:02:18.138609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.525 qpair failed and we were unable to recover it. 00:25:43.525 [2024-07-16 01:02:18.148291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.525 [2024-07-16 01:02:18.148455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.525 [2024-07-16 01:02:18.148480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.525 [2024-07-16 01:02:18.148494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.525 [2024-07-16 01:02:18.148516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.525 [2024-07-16 01:02:18.148544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.525 qpair failed and we were unable to recover it. 00:25:43.525 [2024-07-16 01:02:18.158393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.525 [2024-07-16 01:02:18.158604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.525 [2024-07-16 01:02:18.158630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.525 [2024-07-16 01:02:18.158648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.525 [2024-07-16 01:02:18.158662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.525 [2024-07-16 01:02:18.158691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.525 qpair failed and we were unable to recover it. 
00:25:43.525 [2024-07-16 01:02:18.168339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.525 [2024-07-16 01:02:18.168494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.525 [2024-07-16 01:02:18.168520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.525 [2024-07-16 01:02:18.168534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.525 [2024-07-16 01:02:18.168546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9733f0 00:25:43.525 [2024-07-16 01:02:18.168574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:43.525 qpair failed and we were unable to recover it. 00:25:43.525 [2024-07-16 01:02:18.168684] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:25:43.525 A controller has encountered a failure and is being reset. 00:25:43.525 Controller properly reset. 00:25:43.525 Initializing NVMe Controllers 00:25:43.525 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:43.525 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:43.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:43.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:43.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:43.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:43.525 Initialization complete. Launching workers. 
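After the last failed qpair, the keep-alive submission itself fails, the host driver declares the controller failed and resets it, and the perf application re-attaches over fabrics and re-associates the TCP connection with lcores 0 through 3 before relaunching its workers. A hedged sketch of the kind of host-side invocation that produces this attach/associate output, assuming the SPDK nvme perf example binary; the binary path, queue depth, I/O size, mix, and run time are illustrative, and only the transport ID fields are taken from the log:

  # hedged sketch: host-side perf run against the target at 10.0.0.2:4420 (flags are illustrative)
  ./build/examples/perf -q 32 -o 4096 -w randrw -M 50 -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Because the controller recovers and reconnects, the disconnect test case completes and the teardown that follows can proceed.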
00:25:43.525 Starting thread on core 1 00:25:43.525 Starting thread on core 2 00:25:43.525 Starting thread on core 3 00:25:43.525 Starting thread on core 0 00:25:43.525 01:02:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:43.525 00:25:43.525 real 0m10.770s 00:25:43.525 user 0m17.680s 00:25:43.525 sys 0m5.557s 00:25:43.526 01:02:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:43.526 01:02:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:43.526 ************************************ 00:25:43.526 END TEST nvmf_target_disconnect_tc2 00:25:43.526 ************************************ 00:25:43.526 01:02:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:43.526 01:02:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:43.526 01:02:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:43.526 01:02:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:43.526 01:02:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:43.526 01:02:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:43.526 01:02:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:43.526 01:02:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:43.526 01:02:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:43.526 01:02:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:43.526 rmmod nvme_tcp 00:25:43.526 rmmod nvme_fabrics 00:25:43.784 rmmod nvme_keyring 00:25:43.784 01:02:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:43.784 01:02:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:43.784 01:02:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:43.784 01:02:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2750250 ']' 00:25:43.784 01:02:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2750250 00:25:43.784 01:02:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2750250 ']' 00:25:43.784 01:02:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2750250 00:25:43.784 01:02:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:25:43.784 01:02:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:43.784 01:02:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2750250 00:25:43.784 01:02:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:25:43.784 01:02:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:25:43.784 01:02:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2750250' 00:25:43.784 killing process with pid 2750250 00:25:43.784 01:02:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2750250 00:25:43.784 01:02:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2750250 00:25:44.043 
01:02:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:44.043 01:02:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:44.043 01:02:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:44.043 01:02:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:44.043 01:02:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:44.043 01:02:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.043 01:02:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.043 01:02:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.957 01:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:45.957 00:25:45.957 real 0m15.606s 00:25:45.957 user 0m43.497s 00:25:45.957 sys 0m7.611s 00:25:45.957 01:02:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:45.957 01:02:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:45.957 ************************************ 00:25:45.957 END TEST nvmf_target_disconnect 00:25:45.957 ************************************ 00:25:45.957 01:02:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:45.957 01:02:20 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:25:45.957 01:02:20 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:45.957 01:02:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:45.957 01:02:20 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:25:45.957 00:25:45.957 real 19m40.215s 00:25:45.957 user 46m15.572s 00:25:45.957 sys 4m59.353s 00:25:45.957 01:02:20 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:45.957 01:02:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:45.957 ************************************ 00:25:45.957 END TEST nvmf_tcp 00:25:45.957 ************************************ 00:25:45.957 01:02:20 -- common/autotest_common.sh@1142 -- # return 0 00:25:45.957 01:02:20 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:25:45.957 01:02:20 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:45.957 01:02:20 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:45.957 01:02:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:45.957 01:02:20 -- common/autotest_common.sh@10 -- # set +x 00:25:46.216 ************************************ 00:25:46.216 START TEST spdkcli_nvmf_tcp 00:25:46.216 ************************************ 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:46.216 * Looking for test storage... 
00:25:46.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2751438 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2751438 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2751438 ']' 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:46.216 01:02:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:46.216 [2024-07-16 01:02:20.832066] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:25:46.216 [2024-07-16 01:02:20.832162] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2751438 ] 00:25:46.216 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.216 [2024-07-16 01:02:20.892957] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:46.476 [2024-07-16 01:02:21.016488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.476 [2024-07-16 01:02:21.016513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.476 01:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:46.476 01:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:25:46.476 01:02:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:46.476 01:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:46.476 01:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:46.476 01:02:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:46.476 01:02:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:46.476 01:02:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:46.476 01:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:46.476 01:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:46.476 01:02:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:46.476 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:46.476 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:46.476 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:46.476 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:46.476 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:46.476 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:46.476 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:46.476 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:46.476 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:46.476 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:46.476 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:46.476 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:46.476 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:46.476 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:46.476 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:46.476 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:46.476 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:46.476 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:46.476 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:46.476 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:46.476 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:46.476 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:46.476 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:46.476 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:46.476 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:46.476 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:46.476 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:46.476 ' 00:25:49.011 [2024-07-16 01:02:23.674110] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.387 [2024-07-16 01:02:24.910545] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:52.922 [2024-07-16 01:02:27.197705] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:54.824 [2024-07-16 01:02:29.152045] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:56.200 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:56.200 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:56.200 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:56.200 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:56.200 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:56.200 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:56.200 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:56.200 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:56.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:56.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:56.200 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:56.200 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:56.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:56.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:56.200 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:56.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:56.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:56.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:56.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:56.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:56.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:56.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:56.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:56.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:56.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:56.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:56.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:56.200 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:56.200 01:02:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:56.200 01:02:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:56.200 01:02:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:56.200 01:02:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:56.200 01:02:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:56.200 01:02:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:56.200 01:02:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:56.200 01:02:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:56.767 01:02:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:56.767 01:02:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:56.767 01:02:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:56.767 01:02:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:56.767 01:02:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:56.767 01:02:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:56.767 01:02:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:56.767 01:02:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:56.767 01:02:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:56.767 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:56.767 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:56.767 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:56.767 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:56.767 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:56.767 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:56.767 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:56.767 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:56.767 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:56.767 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:56.767 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:56.767 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:56.767 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:56.767 ' 00:26:02.089 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:02.089 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:02.089 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:02.089 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:02.089 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:02.089 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:02.089 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:02.089 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:02.089 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:02.089 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:02.090 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:26:02.090 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:02.090 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:02.090 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2751438 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2751438 ']' 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2751438 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2751438 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2751438' 00:26:02.090 killing process with pid 2751438 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2751438 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2751438 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2751438 ']' 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2751438 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2751438 ']' 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2751438 00:26:02.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2751438) - No such process 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2751438 is not found' 00:26:02.090 Process with pid 2751438 is not found 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:02.090 00:26:02.090 real 0m16.105s 00:26:02.090 user 0m34.034s 00:26:02.090 sys 0m0.789s 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:02.090 01:02:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:02.090 ************************************ 00:26:02.090 END TEST spdkcli_nvmf_tcp 00:26:02.090 ************************************ 00:26:02.348 01:02:36 -- common/autotest_common.sh@1142 -- # return 0 00:26:02.348 01:02:36 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:02.348 01:02:36 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:02.348 01:02:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:02.348 01:02:36 -- common/autotest_common.sh@10 -- # set +x 00:26:02.348 ************************************ 00:26:02.348 START TEST nvmf_identify_passthru 00:26:02.348 ************************************ 00:26:02.348 01:02:36 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:02.348 * Looking for test storage... 00:26:02.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:02.348 01:02:36 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.348 01:02:36 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.348 01:02:36 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.348 01:02:36 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.348 01:02:36 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.348 01:02:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.348 01:02:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.348 01:02:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:02.348 01:02:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:02.348 01:02:36 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.348 01:02:36 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.348 01:02:36 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.348 01:02:36 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.348 01:02:36 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.348 01:02:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.348 01:02:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.348 01:02:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:02.348 01:02:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.348 01:02:36 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.348 01:02:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:02.348 01:02:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:02.348 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:02.349 01:02:36 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:26:02.349 01:02:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:04.268 01:02:38 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:04.268 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:04.268 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:04.268 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:04.269 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:04.269 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
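The probing traced above classifies each port by its PCI vendor/device ID before the run continues: 0x8086:0x1592 and 0x8086:0x159b land in the e810 bucket, 0x8086:0x37d2 in x722, and the 0x15b3 Mellanox IDs in mlx; the pattern checks such as [[ 0x159b == \0\x\1\0\1\7 ]] only matter for RDMA runs, so on this tcp+e810 job both 0x159b ports survive the filter. A minimal sketch of that classification, assuming a hypothetical pci_ids array already holds "vendor:device" strings for every candidate port (a stand-in for the pci_bus_cache lookups done in nvmf/common.sh):

    # Sketch only; pci_ids is an assumed stand-in for the real pci_bus_cache data.
    declare -a e810=() x722=() mlx=()
    for id in "${pci_ids[@]}"; do
        case "$id" in
            0x8086:0x1592|0x8086:0x159b) e810+=("$id") ;;   # Intel E810 family (ice driver)
            0x8086:0x37d2)               x722+=("$id") ;;   # Intel X722 family
            0x15b3:*)                    mlx+=("$id")  ;;   # Mellanox ConnectX parts
        esac
    done
    pci_devs=("${e810[@]}")   # SPDK_TEST_NVMF_NICS=e810, so only the E810 ports are kept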
00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:04.269 01:02:38 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:04.529 01:02:39 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:04.529 01:02:39 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:04.529 01:02:39 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:04.529 01:02:39 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:04.529 01:02:39 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:04.529 01:02:39 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:04.529 01:02:39 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:04.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:04.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:26:04.529 00:26:04.529 --- 10.0.0.2 ping statistics --- 00:26:04.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.529 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:26:04.529 01:02:39 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:04.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:04.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:26:04.529 00:26:04.529 --- 10.0.0.1 ping statistics --- 00:26:04.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.529 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:26:04.529 01:02:39 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.529 01:02:39 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:26:04.529 01:02:39 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:04.529 01:02:39 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:04.529 01:02:39 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:04.529 01:02:39 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:04.529 01:02:39 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:04.529 01:02:39 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:04.529 01:02:39 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:04.529 01:02:39 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:04.529 01:02:39 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:04.529 01:02:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:04.529 01:02:39 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:04.529 01:02:39 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:26:04.529 01:02:39 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:26:04.529 01:02:39 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:26:04.529 01:02:39 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:26:04.529 01:02:39 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:26:04.529 01:02:39 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:26:04.529 01:02:39 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:04.529 01:02:39 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:04.529 01:02:39 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:26:04.529 01:02:39 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:26:04.529 01:02:39 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:26:04.529 01:02:39 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:26:04.529 01:02:39 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:26:04.529 01:02:39 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:26:04.529 01:02:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:26:04.529 01:02:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:04.529 01:02:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:04.529 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.717 
01:02:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:26:08.717 01:02:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:26:08.717 01:02:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:08.717 01:02:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:08.717 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.941 01:02:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:26:12.941 01:02:47 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:12.941 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:12.941 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:12.941 01:02:47 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:26:12.941 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:12.941 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:12.941 01:02:47 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2756070 00:26:12.941 01:02:47 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:12.941 01:02:47 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:12.941 01:02:47 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2756070 00:26:12.941 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2756070 ']' 00:26:12.941 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.941 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:12.941 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.941 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:12.941 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:12.941 [2024-07-16 01:02:47.664280] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:26:12.941 [2024-07-16 01:02:47.664376] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.941 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.197 [2024-07-16 01:02:47.730163] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:13.197 [2024-07-16 01:02:47.835829] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.198 [2024-07-16 01:02:47.835887] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
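Because nvmf_tgt was launched with --wait-for-rpc, the framework stays paused until the test pushes its configuration over /var/tmp/spdk.sock: first nvmf_set_config with --passthru-identify-ctrlr (the JSON request/response pairs traced below), then framework_start_init, and only then nvmf_create_transport. Condensed into plain scripts/rpc.py calls (the client that the rpc_cmd helper wraps), the sequence is roughly the following sketch, not a verbatim replay of the test:

    # Sketch only; the test issues these via rpc_cmd inside the cvl_0_0_ns_spdk namespace.
    ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # enable the custom Identify handler
    ./scripts/rpc.py framework_start_init                        # complete the deferred subsystem init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # same transport flags the test passes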
00:26:13.198 [2024-07-16 01:02:47.835917] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.198 [2024-07-16 01:02:47.835928] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.198 [2024-07-16 01:02:47.835937] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:13.198 [2024-07-16 01:02:47.835992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.198 [2024-07-16 01:02:47.836053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.198 [2024-07-16 01:02:47.836119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:13.198 [2024-07-16 01:02:47.836122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.198 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:13.198 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:26:13.198 01:02:47 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:13.198 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.198 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:13.198 INFO: Log level set to 20 00:26:13.198 INFO: Requests: 00:26:13.198 { 00:26:13.198 "jsonrpc": "2.0", 00:26:13.198 "method": "nvmf_set_config", 00:26:13.198 "id": 1, 00:26:13.198 "params": { 00:26:13.198 "admin_cmd_passthru": { 00:26:13.198 "identify_ctrlr": true 00:26:13.198 } 00:26:13.198 } 00:26:13.198 } 00:26:13.198 00:26:13.198 INFO: response: 00:26:13.198 { 00:26:13.198 "jsonrpc": "2.0", 00:26:13.198 "id": 1, 00:26:13.198 "result": true 00:26:13.198 } 00:26:13.198 00:26:13.198 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.198 01:02:47 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:13.198 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.198 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:13.198 INFO: Setting log level to 20 00:26:13.198 INFO: Setting log level to 20 00:26:13.198 INFO: Log level set to 20 00:26:13.198 INFO: Log level set to 20 00:26:13.198 INFO: Requests: 00:26:13.198 { 00:26:13.198 "jsonrpc": "2.0", 00:26:13.198 "method": "framework_start_init", 00:26:13.198 "id": 1 00:26:13.198 } 00:26:13.198 00:26:13.198 INFO: Requests: 00:26:13.198 { 00:26:13.198 "jsonrpc": "2.0", 00:26:13.198 "method": "framework_start_init", 00:26:13.198 "id": 1 00:26:13.198 } 00:26:13.198 00:26:13.503 [2024-07-16 01:02:47.980096] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:13.503 INFO: response: 00:26:13.503 { 00:26:13.503 "jsonrpc": "2.0", 00:26:13.503 "id": 1, 00:26:13.503 "result": true 00:26:13.503 } 00:26:13.503 00:26:13.503 INFO: response: 00:26:13.503 { 00:26:13.503 "jsonrpc": "2.0", 00:26:13.503 "id": 1, 00:26:13.503 "result": true 00:26:13.503 } 00:26:13.503 00:26:13.503 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.503 01:02:47 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:13.503 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.503 01:02:47 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:26:13.503 INFO: Setting log level to 40 00:26:13.503 INFO: Setting log level to 40 00:26:13.503 INFO: Setting log level to 40 00:26:13.503 [2024-07-16 01:02:47.990136] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.503 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.503 01:02:47 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:13.503 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:13.503 01:02:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:13.503 01:02:48 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:26:13.503 01:02:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.503 01:02:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:16.790 Nvme0n1 00:26:16.790 01:02:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.790 01:02:50 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:16.790 01:02:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.790 01:02:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:16.790 01:02:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.790 01:02:50 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:16.790 01:02:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.790 01:02:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:16.790 01:02:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.790 01:02:50 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:16.790 01:02:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.790 01:02:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:16.790 [2024-07-16 01:02:50.878602] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.790 01:02:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.790 01:02:50 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:16.790 01:02:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.790 01:02:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:16.790 [ 00:26:16.790 { 00:26:16.790 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:16.790 "subtype": "Discovery", 00:26:16.790 "listen_addresses": [], 00:26:16.790 "allow_any_host": true, 00:26:16.790 "hosts": [] 00:26:16.790 }, 00:26:16.790 { 00:26:16.790 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:16.790 "subtype": "NVMe", 00:26:16.790 "listen_addresses": [ 00:26:16.790 { 00:26:16.790 "trtype": "TCP", 00:26:16.790 "adrfam": "IPv4", 00:26:16.790 "traddr": "10.0.0.2", 00:26:16.790 "trsvcid": "4420" 00:26:16.790 } 00:26:16.790 ], 00:26:16.790 "allow_any_host": true, 00:26:16.790 "hosts": [], 00:26:16.790 "serial_number": 
"SPDK00000000000001", 00:26:16.790 "model_number": "SPDK bdev Controller", 00:26:16.790 "max_namespaces": 1, 00:26:16.790 "min_cntlid": 1, 00:26:16.790 "max_cntlid": 65519, 00:26:16.790 "namespaces": [ 00:26:16.790 { 00:26:16.790 "nsid": 1, 00:26:16.790 "bdev_name": "Nvme0n1", 00:26:16.790 "name": "Nvme0n1", 00:26:16.790 "nguid": "F6E38ABDD59F49ADB486A29D3D487BA0", 00:26:16.790 "uuid": "f6e38abd-d59f-49ad-b486-a29d3d487ba0" 00:26:16.790 } 00:26:16.790 ] 00:26:16.790 } 00:26:16.790 ] 00:26:16.790 01:02:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.790 01:02:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:16.790 01:02:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:16.790 01:02:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:16.790 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.790 01:02:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:26:16.790 01:02:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:16.790 01:02:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:16.790 01:02:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:16.790 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.790 01:02:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:26:16.790 01:02:51 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:26:16.790 01:02:51 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:26:16.790 01:02:51 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:16.790 01:02:51 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.790 01:02:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:16.790 01:02:51 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.790 01:02:51 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:16.790 01:02:51 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:16.790 01:02:51 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:16.790 01:02:51 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:26:16.790 01:02:51 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:16.790 01:02:51 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:26:16.790 01:02:51 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:16.790 01:02:51 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:16.790 rmmod nvme_tcp 00:26:16.790 rmmod nvme_fabrics 00:26:16.790 rmmod nvme_keyring 00:26:16.790 01:02:51 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:16.790 01:02:51 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:26:16.790 01:02:51 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:26:16.790 01:02:51 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2756070 ']' 00:26:16.790 01:02:51 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2756070 00:26:16.790 01:02:51 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2756070 ']' 00:26:16.790 01:02:51 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2756070 00:26:16.790 01:02:51 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:26:16.790 01:02:51 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:16.790 01:02:51 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2756070 00:26:16.790 01:02:51 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:16.790 01:02:51 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:16.790 01:02:51 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2756070' 00:26:16.790 killing process with pid 2756070 00:26:16.790 01:02:51 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2756070 00:26:16.790 01:02:51 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2756070 00:26:18.696 01:02:52 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:18.696 01:02:52 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:18.696 01:02:52 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:18.696 01:02:52 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:18.696 01:02:52 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:18.696 01:02:52 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.696 01:02:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:18.696 01:02:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.599 01:02:55 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:20.599 00:26:20.599 real 0m18.153s 00:26:20.599 user 0m26.938s 00:26:20.599 sys 0m2.352s 00:26:20.599 01:02:55 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:20.599 01:02:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:20.599 ************************************ 00:26:20.599 END TEST nvmf_identify_passthru 00:26:20.599 ************************************ 00:26:20.599 01:02:55 -- common/autotest_common.sh@1142 -- # return 0 00:26:20.599 01:02:55 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:20.599 01:02:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:20.599 01:02:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:20.599 01:02:55 -- common/autotest_common.sh@10 -- # set +x 00:26:20.599 ************************************ 00:26:20.599 START TEST nvmf_dif 00:26:20.599 ************************************ 00:26:20.599 01:02:55 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:20.599 * Looking for test storage... 
00:26:20.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:20.599 01:02:55 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:20.599 01:02:55 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.599 01:02:55 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.599 01:02:55 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.599 01:02:55 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.599 01:02:55 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.599 01:02:55 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.599 01:02:55 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:26:20.599 01:02:55 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:20.599 01:02:55 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:26:20.599 01:02:55 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:20.599 01:02:55 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:20.599 01:02:55 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:26:20.599 01:02:55 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:26:20.599 01:02:55 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:20.600 01:02:55 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.600 01:02:55 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:20.600 01:02:55 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:20.600 01:02:55 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:20.600 01:02:55 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.600 01:02:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:20.600 01:02:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.600 01:02:55 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:20.600 01:02:55 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:20.600 01:02:55 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:26:20.600 01:02:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:22.502 01:02:57 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:22.502 01:02:57 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:26:22.502 01:02:57 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:22.503 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:22.503 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:22.503 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:22.503 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:22.503 01:02:57 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:22.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:22.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:26:22.503 00:26:22.503 --- 10.0.0.2 ping statistics --- 00:26:22.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.503 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:22.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:22.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:26:22.503 00:26:22.503 --- 10.0.0.1 ping statistics --- 00:26:22.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.503 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:22.503 01:02:57 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:23.879 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:23.879 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:23.879 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:23.879 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:23.879 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:23.879 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:23.879 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:23.879 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:23.879 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:23.879 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:23.879 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:23.879 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:23.879 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:23.879 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:23.879 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:23.879 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:23.879 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:23.879 01:02:58 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:23.879 01:02:58 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:23.879 01:02:58 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:23.879 01:02:58 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:23.879 01:02:58 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:23.879 01:02:58 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:23.879 01:02:58 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:23.879 01:02:58 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:26:23.879 01:02:58 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:23.879 01:02:58 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:23.879 01:02:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:23.879 01:02:58 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2759228 00:26:23.879 01:02:58 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:23.879 01:02:58 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2759228 00:26:23.879 01:02:58 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2759228 ']' 00:26:23.879 01:02:58 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.879 01:02:58 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:23.880 01:02:58 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.880 01:02:58 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:23.880 01:02:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:23.880 [2024-07-16 01:02:58.537327] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:26:23.880 [2024-07-16 01:02:58.537417] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.880 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.880 [2024-07-16 01:02:58.604644] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.140 [2024-07-16 01:02:58.714812] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.140 [2024-07-16 01:02:58.714875] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:24.140 [2024-07-16 01:02:58.714911] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:24.140 [2024-07-16 01:02:58.714922] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:24.140 [2024-07-16 01:02:58.714931] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
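For the dif run the target comes up with full event tracing (-e 0xFFFF) but without --wait-for-rpc, so configuration begins as soon as the RPC socket is listening. The steps traced below amount to a TCP transport with DIF insert/strip enabled plus a null bdev carrying 16-byte metadata and DIF type 1, exported through subsystem cnode0 on 10.0.0.2:4420. Condensed into plain scripts/rpc.py calls (a sketch of what rpc_cmd issues for fio_dif_1_default, not a verbatim replay):

    # Sketch only; flags mirror what dif.sh passes via rpc_cmd.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # 64 MiB bdev, 512+16 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420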
00:26:24.140 [2024-07-16 01:02:58.714973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.140 01:02:58 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:24.140 01:02:58 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:26:24.140 01:02:58 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:24.140 01:02:58 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:24.140 01:02:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:24.140 01:02:58 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:24.140 01:02:58 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:26:24.140 01:02:58 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:24.140 01:02:58 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.140 01:02:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:24.140 [2024-07-16 01:02:58.863280] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.140 01:02:58 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.140 01:02:58 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:24.140 01:02:58 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:24.140 01:02:58 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:24.140 01:02:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:24.140 ************************************ 00:26:24.140 START TEST fio_dif_1_default 00:26:24.140 ************************************ 00:26:24.140 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:26:24.140 01:02:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:26:24.140 01:02:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:26:24.140 01:02:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:26:24.140 01:02:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:26:24.140 01:02:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:26:24.140 01:02:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:24.140 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.140 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:24.399 bdev_null0 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:24.399 [2024-07-16 01:02:58.923569] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:24.399 { 00:26:24.399 "params": { 00:26:24.399 "name": "Nvme$subsystem", 00:26:24.399 "trtype": "$TEST_TRANSPORT", 00:26:24.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.399 "adrfam": "ipv4", 00:26:24.399 "trsvcid": "$NVMF_PORT", 00:26:24.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.399 "hdgst": ${hdgst:-false}, 00:26:24.399 "ddgst": ${ddgst:-false} 00:26:24.399 }, 00:26:24.399 "method": "bdev_nvme_attach_controller" 00:26:24.399 } 00:26:24.399 EOF 00:26:24.399 )") 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:26:24.399 01:02:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:24.399 "params": { 00:26:24.399 "name": "Nvme0", 00:26:24.399 "trtype": "tcp", 00:26:24.399 "traddr": "10.0.0.2", 00:26:24.399 "adrfam": "ipv4", 00:26:24.399 "trsvcid": "4420", 00:26:24.399 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:24.399 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:24.400 "hdgst": false, 00:26:24.400 "ddgst": false 00:26:24.400 }, 00:26:24.400 "method": "bdev_nvme_attach_controller" 00:26:24.400 }' 00:26:24.400 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:24.400 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:24.400 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:24.400 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:24.400 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:24.400 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:24.400 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:24.400 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:24.400 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:24.400 01:02:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:24.661 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:24.661 fio-3.35 00:26:24.661 Starting 1 thread 00:26:24.661 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.861 00:26:36.861 filename0: (groupid=0, jobs=1): err= 0: pid=2759531: Tue Jul 16 01:03:09 2024 00:26:36.861 read: IOPS=187, BW=748KiB/s (766kB/s)(7488KiB/10008msec) 00:26:36.861 slat (nsec): min=4751, max=68621, avg=9498.04, stdev=3055.58 00:26:36.861 clat (usec): min=803, max=44543, avg=21354.21, stdev=20278.85 00:26:36.862 lat (usec): min=811, max=44558, avg=21363.70, stdev=20278.68 00:26:36.862 clat percentiles (usec): 00:26:36.862 | 1.00th=[ 840], 5.00th=[ 865], 10.00th=[ 873], 20.00th=[ 889], 00:26:36.862 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[41157], 60.00th=[41157], 00:26:36.862 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:26:36.862 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:26:36.862 | 99.99th=[44303] 00:26:36.862 bw ( KiB/s): min= 672, max= 768, per=99.84%, avg=747.20, stdev=34.86, samples=20 00:26:36.862 iops : min= 168, max= 192, 
avg=186.80, stdev= 8.72, samples=20 00:26:36.862 lat (usec) : 1000=47.97% 00:26:36.862 lat (msec) : 2=1.60%, 50=50.43% 00:26:36.862 cpu : usr=89.18%, sys=10.44%, ctx=12, majf=0, minf=181 00:26:36.862 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:36.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.862 issued rwts: total=1872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.862 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:36.862 00:26:36.862 Run status group 0 (all jobs): 00:26:36.862 READ: bw=748KiB/s (766kB/s), 748KiB/s-748KiB/s (766kB/s-766kB/s), io=7488KiB (7668kB), run=10008-10008msec 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.862 00:26:36.862 real 0m11.216s 00:26:36.862 user 0m10.091s 00:26:36.862 sys 0m1.312s 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:36.862 ************************************ 00:26:36.862 END TEST fio_dif_1_default 00:26:36.862 ************************************ 00:26:36.862 01:03:10 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:36.862 01:03:10 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:36.862 01:03:10 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:36.862 01:03:10 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:36.862 01:03:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:36.862 ************************************ 00:26:36.862 START TEST fio_dif_1_multi_subsystems 00:26:36.862 ************************************ 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@30 -- # for sub in "$@" 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:36.862 bdev_null0 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:36.862 [2024-07-16 01:03:10.193607] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:36.862 bdev_null1 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
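
As an illustrative aside (not part of this log): create_subsystems here loops over subsystem ids 0 and 1, and each rpc_cmd call forwards its arguments to SPDK's scripts/rpc.py over /var/tmp/spdk.sock. Replayed by hand, the sequence this pass is walking through (it continues below with cnode1's namespace and listener) would look roughly like the loop here; the arguments are copied from the log, the condensed loop form is mine.

# Hand-run equivalent of the rpc_cmd calls in this pass (rpc_cmd wraps scripts/rpc.py).
rpc=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock)
for i in 0 1; do
    # Null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1.
    "${rpc[@]}" bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
    "${rpc[@]}" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    "${rpc[@]}" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    "${rpc[@]}" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
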
00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.862 { 00:26:36.862 "params": { 00:26:36.862 "name": "Nvme$subsystem", 00:26:36.862 "trtype": "$TEST_TRANSPORT", 00:26:36.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.862 "adrfam": "ipv4", 00:26:36.862 "trsvcid": "$NVMF_PORT", 00:26:36.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.862 "hdgst": ${hdgst:-false}, 00:26:36.862 "ddgst": ${ddgst:-false} 00:26:36.862 }, 00:26:36.862 "method": "bdev_nvme_attach_controller" 00:26:36.862 } 00:26:36.862 EOF 00:26:36.862 )") 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:26:36.862 01:03:10 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.862 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.862 { 00:26:36.862 "params": { 00:26:36.862 "name": "Nvme$subsystem", 00:26:36.862 "trtype": "$TEST_TRANSPORT", 00:26:36.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.862 "adrfam": "ipv4", 00:26:36.863 "trsvcid": "$NVMF_PORT", 00:26:36.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.863 "hdgst": ${hdgst:-false}, 00:26:36.863 "ddgst": ${ddgst:-false} 00:26:36.863 }, 00:26:36.863 "method": "bdev_nvme_attach_controller" 00:26:36.863 } 00:26:36.863 EOF 00:26:36.863 )") 00:26:36.863 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:36.863 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:36.863 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:36.863 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
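
Aside, for orientation rather than from the log itself: the config+=() here-documents above are gen_nvmf_target_json assembling one bdev_nvme_attach_controller stanza per subsystem; the jq/IFS/printf steps then join them into the document printed just below. A stripped-down sketch of that pattern, with illustrative names (the real helper lives in nvmf/common.sh and additionally wraps the stanzas in the bdev-subsystem config that --spdk_json_conf expects):

# Simplified rendition of the per-subsystem JSON assembly seen above;
# not the actual nvmf/common.sh implementation, just the same heredoc-in-a-loop idea.
gen_target_json_sketch() {
    local sub cfg=()
    for sub in "$@"; do
        cfg+=("$(cat <<EOF
{ "params": { "name": "Nvme$sub", "trtype": "tcp", "traddr": "10.0.0.2",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${cfg[*]}"   # comma-joined stanzas, matching the printf output below
}

Called as gen_target_json_sketch 0 1, this emits the two Nvme0/Nvme1 stanzas that show up in the printed configuration a few entries down.
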
00:26:36.863 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:36.863 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:36.863 "params": { 00:26:36.863 "name": "Nvme0", 00:26:36.863 "trtype": "tcp", 00:26:36.863 "traddr": "10.0.0.2", 00:26:36.863 "adrfam": "ipv4", 00:26:36.863 "trsvcid": "4420", 00:26:36.863 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:36.863 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:36.863 "hdgst": false, 00:26:36.863 "ddgst": false 00:26:36.863 }, 00:26:36.863 "method": "bdev_nvme_attach_controller" 00:26:36.863 },{ 00:26:36.863 "params": { 00:26:36.863 "name": "Nvme1", 00:26:36.863 "trtype": "tcp", 00:26:36.863 "traddr": "10.0.0.2", 00:26:36.863 "adrfam": "ipv4", 00:26:36.863 "trsvcid": "4420", 00:26:36.863 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:36.863 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:36.863 "hdgst": false, 00:26:36.863 "ddgst": false 00:26:36.863 }, 00:26:36.863 "method": "bdev_nvme_attach_controller" 00:26:36.863 }' 00:26:36.863 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:36.863 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:36.863 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:36.863 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:36.863 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:36.863 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:36.863 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:36.863 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:36.863 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:36.863 01:03:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:36.863 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:36.863 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:36.863 fio-3.35 00:26:36.863 Starting 2 threads 00:26:36.863 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.848 00:26:46.848 filename0: (groupid=0, jobs=1): err= 0: pid=2761580: Tue Jul 16 01:03:21 2024 00:26:46.848 read: IOPS=96, BW=385KiB/s (395kB/s)(3856KiB/10008msec) 00:26:46.848 slat (nsec): min=4691, max=24090, avg=10038.24, stdev=3540.75 00:26:46.848 clat (usec): min=40896, max=47608, avg=41494.78, stdev=640.53 00:26:46.848 lat (usec): min=40904, max=47621, avg=41504.82, stdev=640.55 00:26:46.848 clat percentiles (usec): 00:26:46.848 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:46.848 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:26:46.848 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:46.848 | 99.00th=[42206], 99.50th=[42730], 99.90th=[47449], 99.95th=[47449], 00:26:46.848 | 99.99th=[47449] 
00:26:46.848 bw ( KiB/s): min= 352, max= 416, per=34.02%, avg=384.00, stdev=10.38, samples=20 00:26:46.848 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:26:46.848 lat (msec) : 50=100.00% 00:26:46.848 cpu : usr=91.55%, sys=6.25%, ctx=33, majf=0, minf=83 00:26:46.848 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:46.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.848 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.848 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:46.848 filename1: (groupid=0, jobs=1): err= 0: pid=2761581: Tue Jul 16 01:03:21 2024 00:26:46.848 read: IOPS=186, BW=745KiB/s (762kB/s)(7472KiB/10035msec) 00:26:46.848 slat (nsec): min=4546, max=28648, avg=9324.77, stdev=2439.79 00:26:46.848 clat (usec): min=822, max=46661, avg=21459.04, stdev=20530.69 00:26:46.848 lat (usec): min=830, max=46675, avg=21468.36, stdev=20530.46 00:26:46.848 clat percentiles (usec): 00:26:46.848 | 1.00th=[ 848], 5.00th=[ 865], 10.00th=[ 873], 20.00th=[ 889], 00:26:46.848 | 30.00th=[ 898], 40.00th=[ 914], 50.00th=[41157], 60.00th=[42206], 00:26:46.848 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:46.848 | 99.00th=[42730], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:26:46.848 | 99.99th=[46400] 00:26:46.848 bw ( KiB/s): min= 704, max= 768, per=66.00%, avg=745.60, stdev=31.32, samples=20 00:26:46.848 iops : min= 176, max= 192, avg=186.40, stdev= 7.83, samples=20 00:26:46.848 lat (usec) : 1000=49.68% 00:26:46.848 lat (msec) : 2=0.21%, 50=50.11% 00:26:46.848 cpu : usr=94.16%, sys=5.56%, ctx=26, majf=0, minf=177 00:26:46.848 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:46.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.848 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.848 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:46.848 00:26:46.848 Run status group 0 (all jobs): 00:26:46.848 READ: bw=1129KiB/s (1156kB/s), 385KiB/s-745KiB/s (395kB/s-762kB/s), io=11.1MiB (11.6MB), run=10008-10035msec 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 
00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.107 00:26:47.107 real 0m11.499s 00:26:47.107 user 0m20.196s 00:26:47.107 sys 0m1.461s 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:47.107 01:03:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:47.107 ************************************ 00:26:47.107 END TEST fio_dif_1_multi_subsystems 00:26:47.107 ************************************ 00:26:47.107 01:03:21 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:47.107 01:03:21 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:47.107 01:03:21 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:47.107 01:03:21 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:47.107 01:03:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:47.107 ************************************ 00:26:47.107 START TEST fio_dif_rand_params 00:26:47.107 ************************************ 00:26:47.107 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:26:47.107 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:47.107 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:47.107 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:47.107 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:47.107 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:47.107 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:47.107 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:47.107 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:47.107 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:47.107 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 0 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.108 bdev_null0 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.108 [2024-07-16 01:03:21.746788] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:47.108 { 00:26:47.108 "params": { 00:26:47.108 "name": "Nvme$subsystem", 00:26:47.108 "trtype": "$TEST_TRANSPORT", 00:26:47.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:47.108 "adrfam": "ipv4", 00:26:47.108 "trsvcid": "$NVMF_PORT", 00:26:47.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:47.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:47.108 "hdgst": ${hdgst:-false}, 00:26:47.108 "ddgst": ${ddgst:-false} 00:26:47.108 }, 00:26:47.108 "method": "bdev_nvme_attach_controller" 00:26:47.108 } 00:26:47.108 EOF 00:26:47.108 )") 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
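
Condensing the fio wiring above into one command, as an illustration outside the harness: the spdk_bdev ioengine is loaded by LD_PRELOAD-ing the fio plugin, the target JSON is passed through --spdk_json_conf, and the generated job file is handed over on a /dev/fd path. The sketch below reuses the gen_nvmf_target_json and gen_fio_conf helpers visible in this log, with process substitution standing in for file descriptors 62 and 61.

# Illustrative condensation of the fio invocation assembled above; not captured output.
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
LD_PRELOAD="$plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0) \
    <(gen_fio_conf)
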
00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:47.108 "params": { 00:26:47.108 "name": "Nvme0", 00:26:47.108 "trtype": "tcp", 00:26:47.108 "traddr": "10.0.0.2", 00:26:47.108 "adrfam": "ipv4", 00:26:47.108 "trsvcid": "4420", 00:26:47.108 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:47.108 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:47.108 "hdgst": false, 00:26:47.108 "ddgst": false 00:26:47.108 }, 00:26:47.108 "method": "bdev_nvme_attach_controller" 00:26:47.108 }' 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:47.108 01:03:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:47.367 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:47.368 ... 
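
The job description printed here (randread, 128KiB blocks, iodepth=3, three threads, roughly five seconds of runtime) reflects the job file gen_fio_conf writes for this NULL_DIF=3 pass. That file is not echoed into the log; the following is a plausible reconstruction from the logged parameters only, and the Nvme0n1 filename is an assumption about the bdev name exposed by the attach-controller JSON.

# Hypothetical job file; values mirror the logged job description, while the
# file path and the Nvme0n1 bdev name are assumptions, not captured output.
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
time_based=1
runtime=5
rw=randread
bs=128k
iodepth=3
numjobs=3

[filename0]
filename=Nvme0n1
EOF
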
00:26:47.368 fio-3.35 00:26:47.368 Starting 3 threads 00:26:47.368 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.934 00:26:53.934 filename0: (groupid=0, jobs=1): err= 0: pid=2762972: Tue Jul 16 01:03:27 2024 00:26:53.934 read: IOPS=198, BW=24.9MiB/s (26.1MB/s)(125MiB/5009msec) 00:26:53.934 slat (nsec): min=5160, max=46017, avg=14510.92, stdev=5329.54 00:26:53.934 clat (usec): min=5429, max=90530, avg=15065.26, stdev=13511.16 00:26:53.934 lat (usec): min=5441, max=90543, avg=15079.77, stdev=13511.17 00:26:53.934 clat percentiles (usec): 00:26:53.934 | 1.00th=[ 6194], 5.00th=[ 6849], 10.00th=[ 7439], 20.00th=[ 8848], 00:26:53.934 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[11600], 00:26:53.934 | 70.00th=[12780], 80.00th=[13960], 90.00th=[48497], 95.00th=[51643], 00:26:53.934 | 99.00th=[54264], 99.50th=[88605], 99.90th=[90702], 99.95th=[90702], 00:26:53.934 | 99.99th=[90702] 00:26:53.934 bw ( KiB/s): min=17664, max=37888, per=36.04%, avg=25420.80, stdev=6616.55, samples=10 00:26:53.934 iops : min= 138, max= 296, avg=198.60, stdev=51.69, samples=10 00:26:53.934 lat (msec) : 10=40.86%, 20=48.49%, 50=2.01%, 100=8.63% 00:26:53.934 cpu : usr=92.51%, sys=6.99%, ctx=16, majf=0, minf=121 00:26:53.934 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:53.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.934 issued rwts: total=996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.934 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:53.934 filename0: (groupid=0, jobs=1): err= 0: pid=2762973: Tue Jul 16 01:03:27 2024 00:26:53.934 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(136MiB/5004msec) 00:26:53.934 slat (usec): min=5, max=113, avg=13.22, stdev= 5.84 00:26:53.934 clat (usec): min=5738, max=90784, avg=13737.32, stdev=12238.09 00:26:53.934 lat (usec): min=5750, max=90796, avg=13750.53, stdev=12238.39 00:26:53.934 clat percentiles (usec): 00:26:53.935 | 1.00th=[ 6194], 5.00th=[ 6718], 10.00th=[ 6980], 20.00th=[ 7832], 00:26:53.935 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[10028], 60.00th=[10945], 00:26:53.935 | 70.00th=[11994], 80.00th=[12911], 90.00th=[14877], 95.00th=[51119], 00:26:53.935 | 99.00th=[53740], 99.50th=[54789], 99.90th=[55313], 99.95th=[90702], 00:26:53.935 | 99.99th=[90702] 00:26:53.935 bw ( KiB/s): min=19200, max=37120, per=39.49%, avg=27852.80, stdev=6363.83, samples=10 00:26:53.935 iops : min= 150, max= 290, avg=217.60, stdev=49.72, samples=10 00:26:53.935 lat (msec) : 10=49.40%, 20=41.52%, 50=2.47%, 100=6.60% 00:26:53.935 cpu : usr=92.50%, sys=6.94%, ctx=12, majf=0, minf=152 00:26:53.935 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:53.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.935 issued rwts: total=1091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.935 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:53.935 filename0: (groupid=0, jobs=1): err= 0: pid=2762974: Tue Jul 16 01:03:27 2024 00:26:53.935 read: IOPS=136, BW=17.0MiB/s (17.8MB/s)(85.6MiB/5031msec) 00:26:53.935 slat (nsec): min=5862, max=55110, avg=18720.13, stdev=6087.00 00:26:53.935 clat (usec): min=5301, max=94352, avg=22007.26, stdev=18241.87 00:26:53.935 lat (usec): min=5315, max=94371, avg=22025.98, stdev=18242.26 00:26:53.935 clat percentiles (usec): 
00:26:53.935 | 1.00th=[ 5932], 5.00th=[ 8094], 10.00th=[ 9110], 20.00th=[10159], 00:26:53.935 | 30.00th=[11600], 40.00th=[13042], 50.00th=[14091], 60.00th=[14877], 00:26:53.935 | 70.00th=[16188], 80.00th=[50594], 90.00th=[53740], 95.00th=[55313], 00:26:53.935 | 99.00th=[59507], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848], 00:26:53.935 | 99.99th=[93848] 00:26:53.935 bw ( KiB/s): min= 9984, max=25600, per=24.76%, avg=17462.70, stdev=3844.58, samples=10 00:26:53.935 iops : min= 78, max= 200, avg=136.40, stdev=30.03, samples=10 00:26:53.935 lat (msec) : 10=19.12%, 20=58.10%, 50=1.61%, 100=21.17% 00:26:53.935 cpu : usr=94.53%, sys=5.01%, ctx=9, majf=0, minf=59 00:26:53.935 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:53.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.935 issued rwts: total=685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.935 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:53.935 00:26:53.935 Run status group 0 (all jobs): 00:26:53.935 READ: bw=68.9MiB/s (72.2MB/s), 17.0MiB/s-27.3MiB/s (17.8MB/s-28.6MB/s), io=347MiB (363MB), run=5004-5031msec 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:53.935 
01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:53.935 bdev_null0 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:53.935 [2024-07-16 01:03:27.893570] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:53.935 bdev_null1 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:53.935 bdev_null2 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:26:53.935 { 00:26:53.935 "params": { 00:26:53.935 "name": "Nvme$subsystem", 00:26:53.935 "trtype": "$TEST_TRANSPORT", 00:26:53.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:53.935 "adrfam": "ipv4", 00:26:53.935 "trsvcid": "$NVMF_PORT", 00:26:53.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:53.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:53.935 "hdgst": ${hdgst:-false}, 00:26:53.935 "ddgst": ${ddgst:-false} 00:26:53.935 }, 00:26:53.935 "method": "bdev_nvme_attach_controller" 00:26:53.935 } 00:26:53.935 EOF 00:26:53.935 )") 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:53.935 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:53.936 { 00:26:53.936 "params": { 00:26:53.936 "name": "Nvme$subsystem", 00:26:53.936 "trtype": "$TEST_TRANSPORT", 00:26:53.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:53.936 "adrfam": "ipv4", 00:26:53.936 "trsvcid": "$NVMF_PORT", 00:26:53.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:53.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:53.936 "hdgst": ${hdgst:-false}, 00:26:53.936 "ddgst": ${ddgst:-false} 00:26:53.936 }, 00:26:53.936 "method": "bdev_nvme_attach_controller" 00:26:53.936 } 00:26:53.936 EOF 00:26:53.936 )") 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # 
cat 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:53.936 { 00:26:53.936 "params": { 00:26:53.936 "name": "Nvme$subsystem", 00:26:53.936 "trtype": "$TEST_TRANSPORT", 00:26:53.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:53.936 "adrfam": "ipv4", 00:26:53.936 "trsvcid": "$NVMF_PORT", 00:26:53.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:53.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:53.936 "hdgst": ${hdgst:-false}, 00:26:53.936 "ddgst": ${ddgst:-false} 00:26:53.936 }, 00:26:53.936 "method": "bdev_nvme_attach_controller" 00:26:53.936 } 00:26:53.936 EOF 00:26:53.936 )") 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:53.936 "params": { 00:26:53.936 "name": "Nvme0", 00:26:53.936 "trtype": "tcp", 00:26:53.936 "traddr": "10.0.0.2", 00:26:53.936 "adrfam": "ipv4", 00:26:53.936 "trsvcid": "4420", 00:26:53.936 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:53.936 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:53.936 "hdgst": false, 00:26:53.936 "ddgst": false 00:26:53.936 }, 00:26:53.936 "method": "bdev_nvme_attach_controller" 00:26:53.936 },{ 00:26:53.936 "params": { 00:26:53.936 "name": "Nvme1", 00:26:53.936 "trtype": "tcp", 00:26:53.936 "traddr": "10.0.0.2", 00:26:53.936 "adrfam": "ipv4", 00:26:53.936 "trsvcid": "4420", 00:26:53.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:53.936 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:53.936 "hdgst": false, 00:26:53.936 "ddgst": false 00:26:53.936 }, 00:26:53.936 "method": "bdev_nvme_attach_controller" 00:26:53.936 },{ 00:26:53.936 "params": { 00:26:53.936 "name": "Nvme2", 00:26:53.936 "trtype": "tcp", 00:26:53.936 "traddr": "10.0.0.2", 00:26:53.936 "adrfam": "ipv4", 00:26:53.936 "trsvcid": "4420", 00:26:53.936 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:53.936 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:53.936 "hdgst": false, 00:26:53.936 "ddgst": false 00:26:53.936 }, 00:26:53.936 "method": "bdev_nvme_attach_controller" 00:26:53.936 }' 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:53.936 01:03:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:53.936 01:03:28 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:26:53.936 01:03:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:53.936 01:03:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:53.936 01:03:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:53.936 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:53.936 ... 00:26:53.936 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:53.936 ... 00:26:53.936 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:53.936 ... 00:26:53.936 fio-3.35 00:26:53.936 Starting 24 threads 00:26:53.936 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.140 00:27:06.140 filename0: (groupid=0, jobs=1): err= 0: pid=2763726: Tue Jul 16 01:03:39 2024 00:27:06.140 read: IOPS=471, BW=1884KiB/s (1930kB/s)(18.4MiB/10019msec) 00:27:06.140 slat (usec): min=8, max=157, avg=34.11, stdev=21.27 00:27:06.140 clat (usec): min=14708, max=56221, avg=33691.82, stdev=2029.52 00:27:06.140 lat (usec): min=14717, max=56267, avg=33725.93, stdev=2027.28 00:27:06.140 clat percentiles (usec): 00:27:06.140 | 1.00th=[30540], 5.00th=[32375], 10.00th=[32900], 20.00th=[33162], 00:27:06.140 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:27:06.140 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:27:06.140 | 99.00th=[41681], 99.50th=[45351], 99.90th=[56361], 99.95th=[56361], 00:27:06.140 | 99.99th=[56361] 00:27:06.140 bw ( KiB/s): min= 1667, max= 1936, per=4.16%, avg=1881.55, stdev=70.09, samples=20 00:27:06.140 iops : min= 416, max= 484, avg=470.35, stdev=17.64, samples=20 00:27:06.140 lat (msec) : 20=0.13%, 50=99.49%, 100=0.38% 00:27:06.140 cpu : usr=93.62%, sys=3.57%, ctx=751, majf=0, minf=9 00:27:06.140 IO depths : 1=2.3%, 2=8.5%, 4=24.9%, 8=54.1%, 16=10.2%, 32=0.0%, >=64=0.0% 00:27:06.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.140 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.140 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.140 filename0: (groupid=0, jobs=1): err= 0: pid=2763727: Tue Jul 16 01:03:39 2024 00:27:06.140 read: IOPS=469, BW=1878KiB/s (1923kB/s)(18.3MiB/10004msec) 00:27:06.140 slat (usec): min=8, max=1764, avg=30.30, stdev=40.23 00:27:06.140 clat (usec): min=3205, max=73347, avg=33869.35, stdev=4098.41 00:27:06.140 lat (usec): min=3216, max=73387, avg=33899.64, stdev=4099.12 00:27:06.140 clat percentiles (usec): 00:27:06.140 | 1.00th=[23200], 5.00th=[32375], 10.00th=[32900], 20.00th=[33162], 00:27:06.140 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:27:06.140 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[36963], 00:27:06.140 | 99.00th=[53740], 99.50th=[61604], 99.90th=[65274], 99.95th=[72877], 00:27:06.140 | 99.99th=[72877] 00:27:06.140 bw ( KiB/s): min= 1632, max= 1920, per=4.13%, avg=1868.21, stdev=81.20, samples=19 00:27:06.140 iops : min= 408, max= 480, avg=467.05, stdev=20.30, samples=19 00:27:06.141 lat (msec) : 4=0.06%, 10=0.36%, 20=0.36%, 50=98.13%, 
100=1.09% 00:27:06.141 cpu : usr=88.88%, sys=5.04%, ctx=121, majf=0, minf=9 00:27:06.141 IO depths : 1=0.3%, 2=5.4%, 4=21.2%, 8=60.3%, 16=12.9%, 32=0.0%, >=64=0.0% 00:27:06.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.141 complete : 0=0.0%, 4=93.6%, 8=1.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.141 issued rwts: total=4696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.141 filename0: (groupid=0, jobs=1): err= 0: pid=2763728: Tue Jul 16 01:03:39 2024 00:27:06.141 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10010msec) 00:27:06.141 slat (nsec): min=6382, max=93774, avg=19404.24, stdev=10742.77 00:27:06.141 clat (usec): min=11733, max=43787, avg=33534.87, stdev=2428.06 00:27:06.141 lat (usec): min=11767, max=43814, avg=33554.27, stdev=2427.44 00:27:06.141 clat percentiles (usec): 00:27:06.141 | 1.00th=[24249], 5.00th=[31851], 10.00th=[32900], 20.00th=[33162], 00:27:06.141 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:27:06.141 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:27:06.141 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:27:06.141 | 99.99th=[43779] 00:27:06.141 bw ( KiB/s): min= 1792, max= 1976, per=4.20%, avg=1899.58, stdev=50.89, samples=19 00:27:06.141 iops : min= 448, max= 494, avg=474.89, stdev=12.72, samples=19 00:27:06.141 lat (msec) : 20=0.67%, 50=99.33% 00:27:06.141 cpu : usr=97.08%, sys=1.89%, ctx=38, majf=0, minf=9 00:27:06.141 IO depths : 1=4.5%, 2=10.5%, 4=23.7%, 8=53.3%, 16=8.0%, 32=0.0%, >=64=0.0% 00:27:06.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.141 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.141 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.141 filename0: (groupid=0, jobs=1): err= 0: pid=2763729: Tue Jul 16 01:03:39 2024 00:27:06.141 read: IOPS=471, BW=1884KiB/s (1929kB/s)(18.4MiB/10021msec) 00:27:06.141 slat (nsec): min=6725, max=96200, avg=46200.03, stdev=13975.98 00:27:06.141 clat (usec): min=22773, max=71509, avg=33562.72, stdev=2064.72 00:27:06.141 lat (usec): min=22795, max=71522, avg=33608.92, stdev=2063.32 00:27:06.141 clat percentiles (usec): 00:27:06.141 | 1.00th=[31851], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:27:06.141 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:27:06.141 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:27:06.141 | 99.00th=[40633], 99.50th=[41681], 99.90th=[62653], 99.95th=[62653], 00:27:06.141 | 99.99th=[71828] 00:27:06.141 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1881.60, stdev=73.12, samples=20 00:27:06.141 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:27:06.141 lat (msec) : 50=99.66%, 100=0.34% 00:27:06.141 cpu : usr=97.18%, sys=1.70%, ctx=179, majf=0, minf=9 00:27:06.141 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:06.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.141 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.141 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.141 filename0: (groupid=0, jobs=1): err= 0: pid=2763730: Tue Jul 16 01:03:39 2024 00:27:06.141 read: IOPS=472, 
BW=1892KiB/s (1937kB/s)(18.5MiB/10013msec) 00:27:06.141 slat (usec): min=4, max=100, avg=44.63, stdev=15.13 00:27:06.141 clat (usec): min=14411, max=42156, avg=33446.58, stdev=1526.07 00:27:06.141 lat (usec): min=14422, max=42194, avg=33491.22, stdev=1525.94 00:27:06.141 clat percentiles (usec): 00:27:06.141 | 1.00th=[31327], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:27:06.141 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:27:06.141 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:27:06.141 | 99.00th=[39584], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:27:06.141 | 99.99th=[42206] 00:27:06.141 bw ( KiB/s): min= 1788, max= 1920, per=4.17%, avg=1887.80, stdev=57.23, samples=20 00:27:06.141 iops : min= 447, max= 480, avg=471.95, stdev=14.31, samples=20 00:27:06.141 lat (msec) : 20=0.34%, 50=99.66% 00:27:06.141 cpu : usr=96.64%, sys=2.25%, ctx=205, majf=0, minf=9 00:27:06.141 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:27:06.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.141 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.141 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.141 filename0: (groupid=0, jobs=1): err= 0: pid=2763731: Tue Jul 16 01:03:39 2024 00:27:06.141 read: IOPS=472, BW=1892KiB/s (1937kB/s)(18.5MiB/10013msec) 00:27:06.141 slat (nsec): min=4128, max=86166, avg=32945.78, stdev=17243.31 00:27:06.141 clat (usec): min=10689, max=57675, avg=33577.10, stdev=2578.76 00:27:06.141 lat (usec): min=10716, max=57698, avg=33610.04, stdev=2578.85 00:27:06.141 clat percentiles (usec): 00:27:06.141 | 1.00th=[26346], 5.00th=[32375], 10.00th=[32637], 20.00th=[33162], 00:27:06.141 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:27:06.141 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:27:06.141 | 99.00th=[41157], 99.50th=[42206], 99.90th=[57410], 99.95th=[57410], 00:27:06.141 | 99.99th=[57934] 00:27:06.141 bw ( KiB/s): min= 1788, max= 1920, per=4.17%, avg=1887.80, stdev=57.23, samples=20 00:27:06.141 iops : min= 447, max= 480, avg=471.95, stdev=14.31, samples=20 00:27:06.141 lat (msec) : 20=0.80%, 50=98.86%, 100=0.34% 00:27:06.141 cpu : usr=97.63%, sys=1.62%, ctx=78, majf=0, minf=9 00:27:06.141 IO depths : 1=4.3%, 2=8.7%, 4=18.9%, 8=59.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:27:06.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.141 complete : 0=0.0%, 4=92.4%, 8=1.8%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.141 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.141 filename0: (groupid=0, jobs=1): err= 0: pid=2763732: Tue Jul 16 01:03:39 2024 00:27:06.141 read: IOPS=471, BW=1887KiB/s (1933kB/s)(18.5MiB/10016msec) 00:27:06.141 slat (usec): min=3, max=107, avg=42.42, stdev=21.47 00:27:06.141 clat (usec): min=18582, max=66778, avg=33569.20, stdev=2808.55 00:27:06.141 lat (usec): min=18590, max=66790, avg=33611.61, stdev=2807.66 00:27:06.141 clat percentiles (usec): 00:27:06.141 | 1.00th=[25822], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:27:06.141 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:27:06.141 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:27:06.141 | 99.00th=[41157], 99.50th=[60031], 
99.90th=[66847], 99.95th=[66847], 00:27:06.141 | 99.99th=[66847] 00:27:06.141 bw ( KiB/s): min= 1667, max= 1968, per=4.16%, avg=1882.26, stdev=73.84, samples=19 00:27:06.141 iops : min= 416, max= 492, avg=470.53, stdev=18.58, samples=19 00:27:06.141 lat (msec) : 20=0.34%, 50=99.11%, 100=0.55% 00:27:06.141 cpu : usr=96.76%, sys=1.96%, ctx=154, majf=0, minf=9 00:27:06.141 IO depths : 1=0.7%, 2=6.5%, 4=23.3%, 8=57.4%, 16=12.1%, 32=0.0%, >=64=0.0% 00:27:06.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.141 complete : 0=0.0%, 4=94.0%, 8=0.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.141 issued rwts: total=4726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.141 filename0: (groupid=0, jobs=1): err= 0: pid=2763733: Tue Jul 16 01:03:39 2024 00:27:06.141 read: IOPS=471, BW=1885KiB/s (1930kB/s)(18.4MiB/10016msec) 00:27:06.141 slat (nsec): min=4187, max=96662, avg=49439.04, stdev=15550.50 00:27:06.141 clat (usec): min=23801, max=58874, avg=33483.49, stdev=1849.55 00:27:06.141 lat (usec): min=23824, max=58892, avg=33532.93, stdev=1848.32 00:27:06.141 clat percentiles (usec): 00:27:06.141 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:27:06.141 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:27:06.141 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:27:06.141 | 99.00th=[40633], 99.50th=[41681], 99.90th=[58983], 99.95th=[58983], 00:27:06.141 | 99.99th=[58983] 00:27:06.141 bw ( KiB/s): min= 1660, max= 1920, per=4.15%, avg=1879.37, stdev=75.19, samples=19 00:27:06.141 iops : min= 415, max= 480, avg=469.84, stdev=18.80, samples=19 00:27:06.141 lat (msec) : 50=99.66%, 100=0.34% 00:27:06.141 cpu : usr=92.48%, sys=3.61%, ctx=251, majf=0, minf=9 00:27:06.141 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:06.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.141 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.141 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.141 filename1: (groupid=0, jobs=1): err= 0: pid=2763734: Tue Jul 16 01:03:39 2024 00:27:06.141 read: IOPS=474, BW=1897KiB/s (1943kB/s)(18.6MiB/10019msec) 00:27:06.141 slat (usec): min=6, max=109, avg=34.66, stdev=15.90 00:27:06.141 clat (usec): min=11703, max=42007, avg=33464.08, stdev=1856.16 00:27:06.141 lat (usec): min=11714, max=42026, avg=33498.75, stdev=1856.08 00:27:06.141 clat percentiles (usec): 00:27:06.141 | 1.00th=[25560], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:27:06.141 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:27:06.141 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:27:06.141 | 99.00th=[39584], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:27:06.141 | 99.99th=[42206] 00:27:06.141 bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1894.20, stdev=52.44, samples=20 00:27:06.141 iops : min= 448, max= 480, avg=473.55, stdev=13.11, samples=20 00:27:06.141 lat (msec) : 20=0.67%, 50=99.33% 00:27:06.141 cpu : usr=92.55%, sys=3.70%, ctx=157, majf=0, minf=9 00:27:06.141 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:06.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.141 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:27:06.141 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.141 filename1: (groupid=0, jobs=1): err= 0: pid=2763735: Tue Jul 16 01:03:39 2024 00:27:06.141 read: IOPS=474, BW=1898KiB/s (1943kB/s)(18.6MiB/10017msec) 00:27:06.141 slat (usec): min=4, max=114, avg=45.64, stdev=20.16 00:27:06.141 clat (usec): min=11826, max=42133, avg=33383.63, stdev=1909.49 00:27:06.141 lat (usec): min=11860, max=42172, avg=33429.27, stdev=1909.65 00:27:06.141 clat percentiles (usec): 00:27:06.141 | 1.00th=[25560], 5.00th=[32375], 10.00th=[32900], 20.00th=[32900], 00:27:06.141 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:27:06.141 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:27:06.141 | 99.00th=[39584], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:27:06.141 | 99.99th=[42206] 00:27:06.142 bw ( KiB/s): min= 1792, max= 1923, per=4.19%, avg=1894.55, stdev=52.61, samples=20 00:27:06.142 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:27:06.142 lat (msec) : 20=0.67%, 50=99.33% 00:27:06.142 cpu : usr=96.62%, sys=1.98%, ctx=131, majf=0, minf=9 00:27:06.142 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:06.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.142 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.142 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.142 filename1: (groupid=0, jobs=1): err= 0: pid=2763736: Tue Jul 16 01:03:39 2024 00:27:06.142 read: IOPS=472, BW=1891KiB/s (1936kB/s)(18.5MiB/10020msec) 00:27:06.142 slat (usec): min=5, max=106, avg=48.47, stdev=21.36 00:27:06.142 clat (usec): min=19746, max=52267, avg=33418.29, stdev=1509.35 00:27:06.142 lat (usec): min=19779, max=52280, avg=33466.76, stdev=1509.77 00:27:06.142 clat percentiles (usec): 00:27:06.142 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:27:06.142 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:27:06.142 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:27:06.142 | 99.00th=[40109], 99.50th=[41157], 99.90th=[52167], 99.95th=[52167], 00:27:06.142 | 99.99th=[52167] 00:27:06.142 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1888.00, stdev=56.87, samples=20 00:27:06.142 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:27:06.142 lat (msec) : 20=0.11%, 50=99.70%, 100=0.19% 00:27:06.142 cpu : usr=97.77%, sys=1.69%, ctx=57, majf=0, minf=9 00:27:06.142 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:06.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.142 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.142 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.142 filename1: (groupid=0, jobs=1): err= 0: pid=2763737: Tue Jul 16 01:03:39 2024 00:27:06.142 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10007msec) 00:27:06.142 slat (usec): min=10, max=210, avg=41.56, stdev=26.89 00:27:06.142 clat (usec): min=23128, max=58234, avg=33509.43, stdev=1848.97 00:27:06.142 lat (usec): min=23148, max=58261, avg=33550.99, stdev=1847.44 00:27:06.142 clat percentiles (usec): 00:27:06.142 | 1.00th=[31327], 
5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:27:06.142 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:27:06.142 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:27:06.142 | 99.00th=[40109], 99.50th=[42730], 99.90th=[57934], 99.95th=[58459], 00:27:06.142 | 99.99th=[58459] 00:27:06.142 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1879.37, stdev=74.43, samples=19 00:27:06.142 iops : min= 416, max= 480, avg=469.84, stdev=18.61, samples=19 00:27:06.142 lat (msec) : 50=99.66%, 100=0.34% 00:27:06.142 cpu : usr=98.17%, sys=1.38%, ctx=13, majf=0, minf=9 00:27:06.142 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:06.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.142 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.142 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.142 filename1: (groupid=0, jobs=1): err= 0: pid=2763738: Tue Jul 16 01:03:39 2024 00:27:06.142 read: IOPS=469, BW=1878KiB/s (1923kB/s)(18.4MiB/10012msec) 00:27:06.142 slat (usec): min=6, max=126, avg=44.66, stdev=25.20 00:27:06.142 clat (usec): min=6218, max=61654, avg=33734.21, stdev=3259.28 00:27:06.142 lat (usec): min=6237, max=61671, avg=33778.87, stdev=3257.75 00:27:06.142 clat percentiles (usec): 00:27:06.142 | 1.00th=[23987], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:27:06.142 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33817], 00:27:06.142 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:27:06.142 | 99.00th=[54264], 99.50th=[56886], 99.90th=[61604], 99.95th=[61604], 00:27:06.142 | 99.99th=[61604] 00:27:06.142 bw ( KiB/s): min= 1664, max= 1976, per=4.14%, avg=1871.16, stdev=74.84, samples=19 00:27:06.142 iops : min= 416, max= 494, avg=467.79, stdev=18.71, samples=19 00:27:06.142 lat (msec) : 10=0.17%, 50=98.77%, 100=1.06% 00:27:06.142 cpu : usr=98.08%, sys=1.49%, ctx=15, majf=0, minf=9 00:27:06.142 IO depths : 1=0.6%, 2=5.8%, 4=21.0%, 8=60.0%, 16=12.6%, 32=0.0%, >=64=0.0% 00:27:06.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.142 complete : 0=0.0%, 4=93.6%, 8=1.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.142 issued rwts: total=4701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.142 filename1: (groupid=0, jobs=1): err= 0: pid=2763739: Tue Jul 16 01:03:39 2024 00:27:06.142 read: IOPS=474, BW=1896KiB/s (1942kB/s)(18.6MiB/10019msec) 00:27:06.142 slat (usec): min=8, max=103, avg=16.70, stdev=11.64 00:27:06.142 clat (usec): min=9042, max=58585, avg=33613.42, stdev=3704.31 00:27:06.142 lat (usec): min=9064, max=58595, avg=33630.12, stdev=3704.23 00:27:06.142 clat percentiles (usec): 00:27:06.142 | 1.00th=[13173], 5.00th=[32375], 10.00th=[32900], 20.00th=[33162], 00:27:06.142 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:27:06.142 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:27:06.142 | 99.00th=[43254], 99.50th=[56886], 99.90th=[57410], 99.95th=[57410], 00:27:06.142 | 99.99th=[58459] 00:27:06.142 bw ( KiB/s): min= 1808, max= 1968, per=4.19%, avg=1894.35, stdev=47.55, samples=20 00:27:06.142 iops : min= 452, max= 492, avg=473.55, stdev=11.88, samples=20 00:27:06.142 lat (msec) : 10=0.25%, 20=1.26%, 50=97.60%, 100=0.88% 00:27:06.142 cpu : usr=98.32%, sys=1.29%, 
ctx=19, majf=0, minf=9 00:27:06.142 IO depths : 1=2.0%, 2=8.0%, 4=24.2%, 8=55.3%, 16=10.5%, 32=0.0%, >=64=0.0% 00:27:06.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.142 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.142 issued rwts: total=4750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.142 filename1: (groupid=0, jobs=1): err= 0: pid=2763740: Tue Jul 16 01:03:39 2024 00:27:06.142 read: IOPS=471, BW=1884KiB/s (1929kB/s)(18.4MiB/10020msec) 00:27:06.142 slat (usec): min=8, max=131, avg=44.35, stdev=26.18 00:27:06.142 clat (usec): min=20174, max=52225, avg=33614.24, stdev=1748.81 00:27:06.142 lat (usec): min=20186, max=52259, avg=33658.59, stdev=1746.08 00:27:06.142 clat percentiles (usec): 00:27:06.142 | 1.00th=[31589], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:27:06.142 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:27:06.142 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:27:06.142 | 99.00th=[41681], 99.50th=[42730], 99.90th=[52167], 99.95th=[52167], 00:27:06.142 | 99.99th=[52167] 00:27:06.142 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1881.60, stdev=71.82, samples=20 00:27:06.142 iops : min= 416, max= 480, avg=470.40, stdev=17.95, samples=20 00:27:06.142 lat (msec) : 50=99.66%, 100=0.34% 00:27:06.142 cpu : usr=98.44%, sys=1.16%, ctx=13, majf=0, minf=9 00:27:06.142 IO depths : 1=4.5%, 2=10.8%, 4=24.9%, 8=51.8%, 16=8.0%, 32=0.0%, >=64=0.0% 00:27:06.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.142 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.142 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.142 filename1: (groupid=0, jobs=1): err= 0: pid=2763741: Tue Jul 16 01:03:39 2024 00:27:06.142 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10007msec) 00:27:06.142 slat (usec): min=6, max=130, avg=45.14, stdev=26.29 00:27:06.142 clat (usec): min=11842, max=76891, avg=33385.23, stdev=3123.33 00:27:06.142 lat (usec): min=11857, max=76911, avg=33430.37, stdev=3124.23 00:27:06.142 clat percentiles (usec): 00:27:06.142 | 1.00th=[23462], 5.00th=[32113], 10.00th=[32637], 20.00th=[32900], 00:27:06.142 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:27:06.142 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:27:06.142 | 99.00th=[43779], 99.50th=[56886], 99.90th=[59507], 99.95th=[59507], 00:27:06.142 | 99.99th=[77071] 00:27:06.142 bw ( KiB/s): min= 1763, max= 2064, per=4.17%, avg=1885.42, stdev=73.38, samples=19 00:27:06.142 iops : min= 440, max= 516, avg=471.32, stdev=18.42, samples=19 00:27:06.142 lat (msec) : 20=0.17%, 50=99.20%, 100=0.63% 00:27:06.142 cpu : usr=98.29%, sys=1.32%, ctx=17, majf=0, minf=9 00:27:06.142 IO depths : 1=4.8%, 2=9.7%, 4=20.3%, 8=56.5%, 16=8.7%, 32=0.0%, >=64=0.0% 00:27:06.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.142 complete : 0=0.0%, 4=93.1%, 8=2.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.142 issued rwts: total=4734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.142 filename2: (groupid=0, jobs=1): err= 0: pid=2763742: Tue Jul 16 01:03:39 2024 00:27:06.142 read: IOPS=471, BW=1884KiB/s (1930kB/s)(18.4MiB/10019msec) 00:27:06.142 slat 
(usec): min=8, max=149, avg=49.67, stdev=26.26 00:27:06.142 clat (usec): min=23616, max=61651, avg=33588.23, stdev=1669.69 00:27:06.142 lat (usec): min=23628, max=61690, avg=33637.89, stdev=1665.74 00:27:06.142 clat percentiles (usec): 00:27:06.142 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:27:06.142 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:27:06.142 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:27:06.142 | 99.00th=[41157], 99.50th=[41681], 99.90th=[51643], 99.95th=[52167], 00:27:06.142 | 99.99th=[61604] 00:27:06.142 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1881.60, stdev=73.12, samples=20 00:27:06.142 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:27:06.142 lat (msec) : 50=99.66%, 100=0.34% 00:27:06.142 cpu : usr=98.35%, sys=1.26%, ctx=13, majf=0, minf=9 00:27:06.142 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:06.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.142 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.142 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.142 filename2: (groupid=0, jobs=1): err= 0: pid=2763743: Tue Jul 16 01:03:39 2024 00:27:06.142 read: IOPS=471, BW=1884KiB/s (1929kB/s)(18.4MiB/10020msec) 00:27:06.142 slat (usec): min=8, max=143, avg=52.25, stdev=27.14 00:27:06.142 clat (usec): min=20966, max=62306, avg=33458.05, stdev=2008.82 00:27:06.142 lat (usec): min=20975, max=62346, avg=33510.30, stdev=2008.56 00:27:06.142 clat percentiles (usec): 00:27:06.142 | 1.00th=[29230], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:27:06.142 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:27:06.142 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:27:06.142 | 99.00th=[41681], 99.50th=[44827], 99.90th=[52167], 99.95th=[62129], 00:27:06.142 | 99.99th=[62129] 00:27:06.142 bw ( KiB/s): min= 1664, max= 1952, per=4.16%, avg=1881.60, stdev=74.03, samples=20 00:27:06.142 iops : min= 416, max= 488, avg=470.40, stdev=18.51, samples=20 00:27:06.142 lat (msec) : 50=99.58%, 100=0.42% 00:27:06.143 cpu : usr=98.52%, sys=1.08%, ctx=13, majf=0, minf=9 00:27:06.143 IO depths : 1=5.7%, 2=11.9%, 4=24.9%, 8=50.8%, 16=6.8%, 32=0.0%, >=64=0.0% 00:27:06.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.143 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.143 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.143 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.143 filename2: (groupid=0, jobs=1): err= 0: pid=2763744: Tue Jul 16 01:03:39 2024 00:27:06.143 read: IOPS=474, BW=1897KiB/s (1943kB/s)(18.6MiB/10019msec) 00:27:06.143 slat (usec): min=8, max=129, avg=30.84, stdev=17.02 00:27:06.143 clat (usec): min=11753, max=50586, avg=33501.66, stdev=2160.86 00:27:06.143 lat (usec): min=11782, max=50607, avg=33532.50, stdev=2161.21 00:27:06.143 clat percentiles (usec): 00:27:06.143 | 1.00th=[23725], 5.00th=[31851], 10.00th=[32900], 20.00th=[33162], 00:27:06.143 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:27:06.143 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:27:06.143 | 99.00th=[41157], 99.50th=[41681], 99.90th=[43779], 99.95th=[44827], 00:27:06.143 | 99.99th=[50594] 00:27:06.143 bw ( KiB/s): 
min= 1792, max= 1923, per=4.19%, avg=1894.35, stdev=52.52, samples=20 00:27:06.143 iops : min= 448, max= 480, avg=473.55, stdev=13.11, samples=20 00:27:06.143 lat (msec) : 20=0.67%, 50=99.28%, 100=0.04% 00:27:06.143 cpu : usr=98.27%, sys=1.28%, ctx=14, majf=0, minf=9 00:27:06.143 IO depths : 1=4.8%, 2=11.0%, 4=24.9%, 8=51.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:27:06.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.143 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.143 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.143 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.143 filename2: (groupid=0, jobs=1): err= 0: pid=2763745: Tue Jul 16 01:03:39 2024 00:27:06.143 read: IOPS=471, BW=1886KiB/s (1931kB/s)(18.4MiB/10012msec) 00:27:06.143 slat (usec): min=6, max=138, avg=51.85, stdev=23.98 00:27:06.143 clat (usec): min=23754, max=54608, avg=33417.24, stdev=1666.75 00:27:06.143 lat (usec): min=23770, max=54640, avg=33469.09, stdev=1666.99 00:27:06.143 clat percentiles (usec): 00:27:06.143 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:27:06.143 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:27:06.143 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:27:06.143 | 99.00th=[40633], 99.50th=[41681], 99.90th=[54789], 99.95th=[54789], 00:27:06.143 | 99.99th=[54789] 00:27:06.143 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1879.16, stdev=74.32, samples=19 00:27:06.143 iops : min= 416, max= 480, avg=469.79, stdev=18.58, samples=19 00:27:06.143 lat (msec) : 50=99.66%, 100=0.34% 00:27:06.143 cpu : usr=98.41%, sys=1.20%, ctx=13, majf=0, minf=9 00:27:06.143 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:06.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.143 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.143 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.143 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.143 filename2: (groupid=0, jobs=1): err= 0: pid=2763746: Tue Jul 16 01:03:39 2024 00:27:06.143 read: IOPS=471, BW=1884KiB/s (1930kB/s)(18.4MiB/10019msec) 00:27:06.143 slat (usec): min=8, max=148, avg=52.85, stdev=27.39 00:27:06.143 clat (usec): min=23668, max=52214, avg=33496.37, stdev=1689.35 00:27:06.143 lat (usec): min=23711, max=52246, avg=33549.23, stdev=1686.94 00:27:06.143 clat percentiles (usec): 00:27:06.143 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:27:06.143 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:27:06.143 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:27:06.143 | 99.00th=[41157], 99.50th=[42730], 99.90th=[52167], 99.95th=[52167], 00:27:06.143 | 99.99th=[52167] 00:27:06.143 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1881.60, stdev=73.12, samples=20 00:27:06.143 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:27:06.143 lat (msec) : 50=99.66%, 100=0.34% 00:27:06.143 cpu : usr=98.18%, sys=1.44%, ctx=14, majf=0, minf=9 00:27:06.143 IO depths : 1=5.3%, 2=11.6%, 4=24.9%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:27:06.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.143 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.143 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.143 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:27:06.143 filename2: (groupid=0, jobs=1): err= 0: pid=2763747: Tue Jul 16 01:03:39 2024 00:27:06.143 read: IOPS=467, BW=1869KiB/s (1914kB/s)(18.3MiB/10009msec) 00:27:06.143 slat (usec): min=6, max=126, avg=41.92, stdev=23.95 00:27:06.143 clat (usec): min=11706, max=73719, avg=33894.66, stdev=3633.23 00:27:06.143 lat (usec): min=11728, max=73749, avg=33936.58, stdev=3630.70 00:27:06.143 clat percentiles (usec): 00:27:06.143 | 1.00th=[26084], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:27:06.143 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:27:06.143 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[37487], 00:27:06.143 | 99.00th=[47973], 99.50th=[57410], 99.90th=[73925], 99.95th=[73925], 00:27:06.143 | 99.99th=[73925] 00:27:06.143 bw ( KiB/s): min= 1648, max= 1920, per=4.11%, avg=1860.84, stdev=65.57, samples=19 00:27:06.143 iops : min= 412, max= 480, avg=465.21, stdev=16.39, samples=19 00:27:06.143 lat (msec) : 20=0.11%, 50=99.17%, 100=0.73% 00:27:06.143 cpu : usr=98.30%, sys=1.28%, ctx=13, majf=0, minf=9 00:27:06.143 IO depths : 1=1.0%, 2=6.4%, 4=22.6%, 8=58.3%, 16=11.7%, 32=0.0%, >=64=0.0% 00:27:06.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.143 complete : 0=0.0%, 4=93.8%, 8=0.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.143 issued rwts: total=4676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.143 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.143 filename2: (groupid=0, jobs=1): err= 0: pid=2763748: Tue Jul 16 01:03:39 2024 00:27:06.143 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10006msec) 00:27:06.143 slat (usec): min=14, max=215, avg=46.81, stdev=27.44 00:27:06.143 clat (usec): min=9755, max=76303, avg=33815.28, stdev=3599.56 00:27:06.143 lat (usec): min=9848, max=76333, avg=33862.09, stdev=3598.80 00:27:06.143 clat percentiles (usec): 00:27:06.143 | 1.00th=[24249], 5.00th=[32375], 10.00th=[32637], 20.00th=[33162], 00:27:06.143 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:27:06.143 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:27:06.143 | 99.00th=[51643], 99.50th=[57934], 99.90th=[63701], 99.95th=[76022], 00:27:06.143 | 99.99th=[76022] 00:27:06.143 bw ( KiB/s): min= 1651, max= 1920, per=4.13%, avg=1867.74, stdev=71.06, samples=19 00:27:06.143 iops : min= 412, max= 480, avg=466.89, stdev=17.89, samples=19 00:27:06.143 lat (msec) : 10=0.11%, 20=0.15%, 50=98.59%, 100=1.15% 00:27:06.143 cpu : usr=97.94%, sys=1.63%, ctx=14, majf=0, minf=9 00:27:06.143 IO depths : 1=1.3%, 2=4.1%, 4=20.3%, 8=62.9%, 16=11.4%, 32=0.0%, >=64=0.0% 00:27:06.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.143 complete : 0=0.0%, 4=93.9%, 8=0.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.143 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.143 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.143 filename2: (groupid=0, jobs=1): err= 0: pid=2763749: Tue Jul 16 01:03:39 2024 00:27:06.143 read: IOPS=468, BW=1875KiB/s (1920kB/s)(18.3MiB/10003msec) 00:27:06.143 slat (usec): min=8, max=125, avg=30.42, stdev=23.88 00:27:06.143 clat (usec): min=3755, max=92221, avg=33961.80, stdev=4272.25 00:27:06.143 lat (usec): min=3763, max=92290, avg=33992.22, stdev=4273.56 00:27:06.143 clat percentiles (usec): 00:27:06.143 | 1.00th=[24773], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:27:06.143 | 30.00th=[33424], 40.00th=[33424], 
50.00th=[33817], 60.00th=[33817], 00:27:06.143 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:27:06.143 | 99.00th=[50594], 99.50th=[60031], 99.90th=[72877], 99.95th=[91751], 00:27:06.143 | 99.99th=[91751] 00:27:06.143 bw ( KiB/s): min= 1532, max= 1920, per=4.11%, avg=1861.26, stdev=90.50, samples=19 00:27:06.143 iops : min= 383, max= 480, avg=465.32, stdev=22.63, samples=19 00:27:06.143 lat (msec) : 4=0.21%, 10=0.34%, 20=0.06%, 50=98.36%, 100=1.02% 00:27:06.143 cpu : usr=98.14%, sys=1.44%, ctx=12, majf=0, minf=9 00:27:06.143 IO depths : 1=0.2%, 2=1.8%, 4=8.2%, 8=73.3%, 16=16.5%, 32=0.0%, >=64=0.0% 00:27:06.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.143 complete : 0=0.0%, 4=91.2%, 8=6.6%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.143 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.143 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:06.143 00:27:06.143 Run status group 0 (all jobs): 00:27:06.143 READ: bw=44.2MiB/s (46.3MB/s), 1869KiB/s-1899KiB/s (1914kB/s-1944kB/s), io=443MiB (464MB), run=10003-10021msec 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:06.143 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:06.144 bdev_null0 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.144 01:03:39 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:06.144 [2024-07-16 01:03:39.461140] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:06.144 bdev_null1 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.144 { 00:27:06.144 "params": { 00:27:06.144 "name": "Nvme$subsystem", 00:27:06.144 "trtype": "$TEST_TRANSPORT", 00:27:06.144 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:06.144 "adrfam": "ipv4", 00:27:06.144 "trsvcid": "$NVMF_PORT", 00:27:06.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.144 "hdgst": ${hdgst:-false}, 00:27:06.144 "ddgst": ${ddgst:-false} 00:27:06.144 }, 00:27:06.144 "method": "bdev_nvme_attach_controller" 00:27:06.144 } 00:27:06.144 EOF 00:27:06.144 )") 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.144 { 00:27:06.144 "params": { 00:27:06.144 "name": "Nvme$subsystem", 00:27:06.144 "trtype": "$TEST_TRANSPORT", 00:27:06.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.144 "adrfam": "ipv4", 00:27:06.144 "trsvcid": "$NVMF_PORT", 00:27:06.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.144 "hdgst": ${hdgst:-false}, 00:27:06.144 "ddgst": ${ddgst:-false} 00:27:06.144 }, 00:27:06.144 "method": "bdev_nvme_attach_controller" 00:27:06.144 } 00:27:06.144 EOF 00:27:06.144 )") 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:06.144 "params": { 00:27:06.144 "name": "Nvme0", 00:27:06.144 "trtype": "tcp", 00:27:06.144 "traddr": "10.0.0.2", 00:27:06.144 "adrfam": "ipv4", 00:27:06.144 "trsvcid": "4420", 00:27:06.144 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:06.144 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:06.144 "hdgst": false, 00:27:06.144 "ddgst": false 00:27:06.144 }, 00:27:06.144 "method": "bdev_nvme_attach_controller" 00:27:06.144 },{ 00:27:06.144 "params": { 00:27:06.144 "name": "Nvme1", 00:27:06.144 "trtype": "tcp", 00:27:06.144 "traddr": "10.0.0.2", 00:27:06.144 "adrfam": "ipv4", 00:27:06.144 "trsvcid": "4420", 00:27:06.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:06.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:06.144 "hdgst": false, 00:27:06.144 "ddgst": false 00:27:06.144 }, 00:27:06.144 "method": "bdev_nvme_attach_controller" 00:27:06.144 }' 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:06.144 01:03:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:06.144 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:06.144 ... 00:27:06.145 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:06.145 ... 
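For reference before the fio output that follows: the trace above launches fio with the SPDK bdev ioengine, reading the generated NVMe-oF attach config from /dev/fd/62 and the generated job file from /dev/fd/61. Below is a rough standalone sketch of that step. It is not part of the autotest run: the outer "subsystems"/"bdev" JSON wrapper, the /tmp paths, the Nvme0n1 bdev name and the job-file options are assumptions for illustration only, while the bdev_nvme_attach_controller parameters mirror the ones printed by the script above.

# Hypothetical standalone reproduction (assumptions noted above, not the script's own code).
# Write a JSON config that attaches one NVMe/TCP controller as a bdev.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Minimal job file; the real gen_fio_conf emits one [filenameN] job per subsystem.
# thread=1 is required by the spdk_bdev engine; filename is the bdev name
# (Nvme0 with one namespace is assumed to expose Nvme0n1).
cat > /tmp/dif.fio <<'EOF'
[global]
thread=1
direct=1
rw=randread
iodepth=8
runtime=5
time_based=1

[filename0]
bs=8k
filename=Nvme0n1
EOF

# Plugin path as used in the log above; fio loads it via LD_PRELOAD.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio

In the actual run, both inputs are handed to fio through /dev/fd/* process substitution instead of temp files, which keeps the generated config local to the fio invocation; the fio output for that run continues below.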
00:27:06.145 fio-3.35 00:27:06.145 Starting 4 threads 00:27:06.145 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.428 00:27:11.428 filename0: (groupid=0, jobs=1): err= 0: pid=2765127: Tue Jul 16 01:03:45 2024 00:27:11.428 read: IOPS=1961, BW=15.3MiB/s (16.1MB/s)(76.6MiB/5001msec) 00:27:11.428 slat (nsec): min=3892, max=36830, avg=11634.90, stdev=3481.54 00:27:11.428 clat (usec): min=1413, max=8979, avg=4046.10, stdev=529.36 00:27:11.428 lat (usec): min=1428, max=8991, avg=4057.74, stdev=529.02 00:27:11.428 clat percentiles (usec): 00:27:11.428 | 1.00th=[ 2933], 5.00th=[ 3359], 10.00th=[ 3621], 20.00th=[ 3785], 00:27:11.428 | 30.00th=[ 3851], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047], 00:27:11.428 | 70.00th=[ 4080], 80.00th=[ 4113], 90.00th=[ 4490], 95.00th=[ 5145], 00:27:11.428 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 6849], 99.95th=[ 6980], 00:27:11.428 | 99.99th=[ 8979] 00:27:11.428 bw ( KiB/s): min=15328, max=15952, per=25.38%, avg=15722.67, stdev=195.14, samples=9 00:27:11.428 iops : min= 1916, max= 1994, avg=1965.33, stdev=24.39, samples=9 00:27:11.428 lat (msec) : 2=0.01%, 4=43.21%, 10=56.78% 00:27:11.428 cpu : usr=94.42%, sys=5.08%, ctx=10, majf=0, minf=9 00:27:11.428 IO depths : 1=0.1%, 2=1.2%, 4=67.4%, 8=31.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:11.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.428 complete : 0=0.0%, 4=95.5%, 8=4.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.428 issued rwts: total=9809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.428 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:11.428 filename0: (groupid=0, jobs=1): err= 0: pid=2765128: Tue Jul 16 01:03:45 2024 00:27:11.428 read: IOPS=1900, BW=14.8MiB/s (15.6MB/s)(74.3MiB/5002msec) 00:27:11.428 slat (nsec): min=3752, max=32035, avg=11317.25, stdev=3450.10 00:27:11.428 clat (usec): min=903, max=7601, avg=4175.41, stdev=683.96 00:27:11.428 lat (usec): min=916, max=7609, avg=4186.72, stdev=683.90 00:27:11.428 clat percentiles (usec): 00:27:11.428 | 1.00th=[ 3032], 5.00th=[ 3490], 10.00th=[ 3621], 20.00th=[ 3785], 00:27:11.428 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047], 00:27:11.428 | 70.00th=[ 4080], 80.00th=[ 4293], 90.00th=[ 5342], 95.00th=[ 5932], 00:27:11.428 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 6849], 99.95th=[ 7373], 00:27:11.428 | 99.99th=[ 7570] 00:27:11.428 bw ( KiB/s): min=14752, max=15792, per=24.54%, avg=15206.10, stdev=319.28, samples=10 00:27:11.428 iops : min= 1844, max= 1974, avg=1900.70, stdev=39.98, samples=10 00:27:11.428 lat (usec) : 1000=0.01% 00:27:11.428 lat (msec) : 4=44.32%, 10=55.67% 00:27:11.428 cpu : usr=92.80%, sys=6.32%, ctx=178, majf=0, minf=0 00:27:11.428 IO depths : 1=0.1%, 2=1.6%, 4=70.4%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:11.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.428 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.428 issued rwts: total=9505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.428 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:11.428 filename1: (groupid=0, jobs=1): err= 0: pid=2765129: Tue Jul 16 01:03:45 2024 00:27:11.428 read: IOPS=2016, BW=15.8MiB/s (16.5MB/s)(78.8MiB/5002msec) 00:27:11.428 slat (nsec): min=3865, max=38051, avg=12019.19, stdev=4392.08 00:27:11.428 clat (usec): min=1382, max=6927, avg=3928.46, stdev=521.12 00:27:11.428 lat (usec): min=1396, max=6942, avg=3940.48, stdev=521.23 00:27:11.428 clat percentiles (usec): 00:27:11.428 | 
1.00th=[ 2606], 5.00th=[ 3064], 10.00th=[ 3326], 20.00th=[ 3654], 00:27:11.428 | 30.00th=[ 3785], 40.00th=[ 3884], 50.00th=[ 3982], 60.00th=[ 4015], 00:27:11.428 | 70.00th=[ 4047], 80.00th=[ 4113], 90.00th=[ 4359], 95.00th=[ 4752], 00:27:11.428 | 99.00th=[ 5866], 99.50th=[ 6128], 99.90th=[ 6652], 99.95th=[ 6915], 00:27:11.428 | 99.99th=[ 6915] 00:27:11.428 bw ( KiB/s): min=15856, max=16384, per=25.96%, avg=16083.56, stdev=177.05, samples=9 00:27:11.428 iops : min= 1982, max= 2048, avg=2010.44, stdev=22.13, samples=9 00:27:11.428 lat (msec) : 2=0.09%, 4=53.00%, 10=46.91% 00:27:11.428 cpu : usr=88.32%, sys=8.68%, ctx=252, majf=0, minf=0 00:27:11.428 IO depths : 1=0.1%, 2=5.8%, 4=66.9%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:11.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.428 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.428 issued rwts: total=10085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.428 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:11.428 filename1: (groupid=0, jobs=1): err= 0: pid=2765130: Tue Jul 16 01:03:45 2024 00:27:11.428 read: IOPS=1913, BW=14.9MiB/s (15.7MB/s)(75.4MiB/5042msec) 00:27:11.428 slat (nsec): min=3916, max=92147, avg=10457.48, stdev=3481.11 00:27:11.428 clat (usec): min=1560, max=41631, avg=4125.34, stdev=865.22 00:27:11.428 lat (usec): min=1574, max=41646, avg=4135.80, stdev=865.21 00:27:11.428 clat percentiles (usec): 00:27:11.428 | 1.00th=[ 3064], 5.00th=[ 3490], 10.00th=[ 3654], 20.00th=[ 3785], 00:27:11.428 | 30.00th=[ 3884], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4047], 00:27:11.428 | 70.00th=[ 4080], 80.00th=[ 4228], 90.00th=[ 4686], 95.00th=[ 5407], 00:27:11.428 | 99.00th=[ 6259], 99.50th=[ 6521], 99.90th=[ 7046], 99.95th=[ 7046], 00:27:11.428 | 99.99th=[41681] 00:27:11.428 bw ( KiB/s): min=15024, max=15840, per=24.91%, avg=15430.40, stdev=260.21, samples=10 00:27:11.428 iops : min= 1878, max= 1980, avg=1928.80, stdev=32.53, samples=10 00:27:11.428 lat (msec) : 2=0.01%, 4=40.61%, 10=59.34%, 50=0.03% 00:27:11.428 cpu : usr=93.83%, sys=5.42%, ctx=12, majf=0, minf=9 00:27:11.428 IO depths : 1=0.2%, 2=5.1%, 4=67.6%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:11.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.428 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.428 issued rwts: total=9647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.428 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:11.428 00:27:11.428 Run status group 0 (all jobs): 00:27:11.428 READ: bw=60.5MiB/s (63.4MB/s), 14.8MiB/s-15.8MiB/s (15.6MB/s-16.5MB/s), io=305MiB (320MB), run=5001-5042msec 00:27:11.428 01:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:11.428 01:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:11.428 01:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:11.428 01:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:11.428 01:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:11.428 01:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:11.428 01:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.428 01:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:11.428 01:03:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.428 01:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:11.428 01:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.428 01:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:11.428 01:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.429 01:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:11.429 01:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:11.429 01:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:11.429 01:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:11.429 01:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.429 01:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:11.429 01:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.429 01:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:11.429 01:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.429 01:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:11.429 01:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.429 00:27:11.429 real 0m24.168s 00:27:11.429 user 4m29.839s 00:27:11.429 sys 0m7.797s 00:27:11.429 01:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:11.429 01:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:11.429 ************************************ 00:27:11.429 END TEST fio_dif_rand_params 00:27:11.429 ************************************ 00:27:11.429 01:03:45 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:11.429 01:03:45 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:11.429 01:03:45 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:11.429 01:03:45 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:11.429 01:03:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:11.429 ************************************ 00:27:11.429 START TEST fio_dif_digest 00:27:11.429 ************************************ 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest 
-- target/dif.sh@128 -- # ddgst=true 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:11.429 bdev_null0 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:11.429 [2024-07-16 01:03:45.960873] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.429 { 00:27:11.429 "params": { 00:27:11.429 "name": "Nvme$subsystem", 00:27:11.429 "trtype": "$TEST_TRANSPORT", 00:27:11.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.429 "adrfam": "ipv4", 00:27:11.429 "trsvcid": "$NVMF_PORT", 00:27:11.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.429 
"hdgst": ${hdgst:-false}, 00:27:11.429 "ddgst": ${ddgst:-false} 00:27:11.429 }, 00:27:11.429 "method": "bdev_nvme_attach_controller" 00:27:11.429 } 00:27:11.429 EOF 00:27:11.429 )") 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:11.429 "params": { 00:27:11.429 "name": "Nvme0", 00:27:11.429 "trtype": "tcp", 00:27:11.429 "traddr": "10.0.0.2", 00:27:11.429 "adrfam": "ipv4", 00:27:11.429 "trsvcid": "4420", 00:27:11.429 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:11.429 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:11.429 "hdgst": true, 00:27:11.429 "ddgst": true 00:27:11.429 }, 00:27:11.429 "method": "bdev_nvme_attach_controller" 00:27:11.429 }' 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:11.429 01:03:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:11.429 01:03:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:11.429 01:03:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:11.429 01:03:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:11.429 01:03:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:11.687 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:11.687 ... 
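For reference, the fio_dif_digest setup traced above maps onto plain SPDK RPCs plus an fio launch with the spdk_bdev ioengine. The sketch below is a minimal reproduction using scripts/rpc.py against an already-running nvmf_tgt; the argument values are the ones that appear in the trace, while bdev.json and dif.fio are hypothetical file names standing in for the /dev/fd descriptors the harness passes to fio. It is a sketch of the equivalent calls, not the harness's exact code path (rpc_cmd talks to the RPC socket directly, and the JSON printed above is generated on the fly by gen_nvmf_target_json).

# Minimal sketch, assuming nvmf_tgt is already running and listening on its RPC socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# DIF type 3 null bdev: 64 MiB, 512-byte blocks, 16-byte metadata (as in the trace)
$SPDK/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# NVMe/TCP subsystem backed by that bdev, listening on 10.0.0.2:4420
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

# fio then runs with the spdk_bdev plugin preloaded; bdev.json would hold the
# bdev_nvme_attach_controller config printed above (hdgst/ddgst true), and
# dif.fio the randread 128k, numjobs=3, iodepth=3 job the harness generates.
LD_PRELOAD=$SPDK/build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio

In the captured run these same calls are issued through rpc_cmd inside target/dif.sh, which is why they appear interleaved with the xtrace timestamps rather than as a standalone script.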
00:27:11.687 fio-3.35 00:27:11.687 Starting 3 threads 00:27:11.687 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.883 00:27:23.883 filename0: (groupid=0, jobs=1): err= 0: pid=2765882: Tue Jul 16 01:03:56 2024 00:27:23.883 read: IOPS=190, BW=23.8MiB/s (25.0MB/s)(239MiB/10044msec) 00:27:23.883 slat (nsec): min=4446, max=34767, avg=13063.30, stdev=2319.66 00:27:23.883 clat (usec): min=6233, max=59042, avg=15704.63, stdev=6880.02 00:27:23.883 lat (usec): min=6244, max=59055, avg=15717.69, stdev=6880.10 00:27:23.883 clat percentiles (usec): 00:27:23.883 | 1.00th=[ 8094], 5.00th=[10683], 10.00th=[12256], 20.00th=[13698], 00:27:23.883 | 30.00th=[14222], 40.00th=[14615], 50.00th=[15008], 60.00th=[15270], 00:27:23.883 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16712], 95.00th=[17433], 00:27:23.883 | 99.00th=[56886], 99.50th=[57934], 99.90th=[58459], 99.95th=[58983], 00:27:23.883 | 99.99th=[58983] 00:27:23.883 bw ( KiB/s): min=19712, max=29184, per=33.35%, avg=24473.60, stdev=2597.98, samples=20 00:27:23.883 iops : min= 154, max= 228, avg=191.20, stdev=20.30, samples=20 00:27:23.883 lat (msec) : 10=2.61%, 20=94.78%, 100=2.61% 00:27:23.883 cpu : usr=92.36%, sys=7.17%, ctx=15, majf=0, minf=82 00:27:23.883 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:23.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.883 issued rwts: total=1914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.883 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:23.883 filename0: (groupid=0, jobs=1): err= 0: pid=2765883: Tue Jul 16 01:03:56 2024 00:27:23.883 read: IOPS=194, BW=24.2MiB/s (25.4MB/s)(244MiB/10046msec) 00:27:23.883 slat (nsec): min=7284, max=66840, avg=13557.42, stdev=2418.85 00:27:23.883 clat (usec): min=6664, max=57619, avg=15423.93, stdev=5916.78 00:27:23.883 lat (usec): min=6676, max=57632, avg=15437.49, stdev=5916.76 00:27:23.883 clat percentiles (usec): 00:27:23.883 | 1.00th=[ 7570], 5.00th=[10552], 10.00th=[11863], 20.00th=[13698], 00:27:23.883 | 30.00th=[14222], 40.00th=[14615], 50.00th=[15008], 60.00th=[15401], 00:27:23.883 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16712], 95.00th=[17433], 00:27:23.883 | 99.00th=[55313], 99.50th=[56361], 99.90th=[57410], 99.95th=[57410], 00:27:23.883 | 99.99th=[57410] 00:27:23.883 bw ( KiB/s): min=21248, max=29952, per=33.96%, avg=24921.60, stdev=1896.32, samples=20 00:27:23.883 iops : min= 166, max= 234, avg=194.70, stdev=14.81, samples=20 00:27:23.883 lat (msec) : 10=2.57%, 20=95.43%, 50=0.10%, 100=1.90% 00:27:23.883 cpu : usr=91.73%, sys=7.78%, ctx=26, majf=0, minf=184 00:27:23.883 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:23.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.883 issued rwts: total=1949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.883 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:23.883 filename0: (groupid=0, jobs=1): err= 0: pid=2765884: Tue Jul 16 01:03:56 2024 00:27:23.883 read: IOPS=188, BW=23.6MiB/s (24.7MB/s)(237MiB/10045msec) 00:27:23.883 slat (nsec): min=4517, max=44770, avg=13281.23, stdev=2304.47 00:27:23.883 clat (usec): min=7569, max=59955, avg=15854.16, stdev=5866.94 00:27:23.883 lat (usec): min=7581, max=59968, avg=15867.44, stdev=5866.97 00:27:23.883 clat percentiles (usec): 00:27:23.883 | 
1.00th=[ 8291], 5.00th=[10945], 10.00th=[12387], 20.00th=[14091], 00:27:23.883 | 30.00th=[14615], 40.00th=[15139], 50.00th=[15401], 60.00th=[15795], 00:27:23.883 | 70.00th=[16188], 80.00th=[16581], 90.00th=[17433], 95.00th=[17957], 00:27:23.883 | 99.00th=[56361], 99.50th=[57410], 99.90th=[59507], 99.95th=[60031], 00:27:23.883 | 99.99th=[60031] 00:27:23.883 bw ( KiB/s): min=18432, max=28160, per=33.04%, avg=24243.20, stdev=2418.81, samples=20 00:27:23.883 iops : min= 144, max= 220, avg=189.40, stdev=18.90, samples=20 00:27:23.883 lat (msec) : 10=1.74%, 20=96.26%, 50=0.21%, 100=1.79% 00:27:23.883 cpu : usr=91.99%, sys=7.53%, ctx=19, majf=0, minf=184 00:27:23.883 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:23.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.883 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.883 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:23.883 00:27:23.883 Run status group 0 (all jobs): 00:27:23.883 READ: bw=71.7MiB/s (75.1MB/s), 23.6MiB/s-24.2MiB/s (24.7MB/s-25.4MB/s), io=720MiB (755MB), run=10044-10046msec 00:27:23.883 01:03:57 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:23.883 01:03:57 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:27:23.883 01:03:57 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:27:23.883 01:03:57 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:23.884 01:03:57 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:27:23.884 01:03:57 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:23.884 01:03:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.884 01:03:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:23.884 01:03:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.884 01:03:57 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:23.884 01:03:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.884 01:03:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:23.884 01:03:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.884 00:27:23.884 real 0m11.303s 00:27:23.884 user 0m29.059s 00:27:23.884 sys 0m2.522s 00:27:23.884 01:03:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:23.884 01:03:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:23.884 ************************************ 00:27:23.884 END TEST fio_dif_digest 00:27:23.884 ************************************ 00:27:23.884 01:03:57 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:23.884 01:03:57 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:23.884 01:03:57 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:27:23.884 01:03:57 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:23.884 01:03:57 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:27:23.884 01:03:57 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:23.884 01:03:57 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:27:23.884 01:03:57 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:23.884 01:03:57 nvmf_dif -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:27:23.884 rmmod nvme_tcp 00:27:23.884 rmmod nvme_fabrics 00:27:23.884 rmmod nvme_keyring 00:27:23.884 01:03:57 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:23.884 01:03:57 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:27:23.884 01:03:57 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:27:23.884 01:03:57 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2759228 ']' 00:27:23.884 01:03:57 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2759228 00:27:23.884 01:03:57 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2759228 ']' 00:27:23.884 01:03:57 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2759228 00:27:23.884 01:03:57 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:27:23.884 01:03:57 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:23.884 01:03:57 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2759228 00:27:23.884 01:03:57 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:23.884 01:03:57 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:23.884 01:03:57 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2759228' 00:27:23.884 killing process with pid 2759228 00:27:23.884 01:03:57 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2759228 00:27:23.884 01:03:57 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2759228 00:27:23.884 01:03:57 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:23.884 01:03:57 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:24.141 Waiting for block devices as requested 00:27:24.141 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:24.141 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:24.398 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:24.398 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:24.398 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:24.398 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:24.661 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:24.662 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:24.662 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:24.662 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:24.924 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:24.924 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:24.924 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:24.924 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:25.182 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:25.182 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:25.182 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:25.442 01:03:59 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:25.442 01:03:59 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:25.442 01:03:59 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:25.442 01:03:59 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:25.442 01:03:59 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.442 01:03:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:25.442 01:03:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.347 01:04:01 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:27.347 00:27:27.347 real 1m6.911s 00:27:27.347 user 6m26.286s 00:27:27.347 sys 0m19.885s 00:27:27.347 01:04:01 nvmf_dif -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:27:27.347 01:04:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:27.347 ************************************ 00:27:27.347 END TEST nvmf_dif 00:27:27.347 ************************************ 00:27:27.347 01:04:02 -- common/autotest_common.sh@1142 -- # return 0 00:27:27.347 01:04:02 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:27.347 01:04:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:27.347 01:04:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.347 01:04:02 -- common/autotest_common.sh@10 -- # set +x 00:27:27.347 ************************************ 00:27:27.347 START TEST nvmf_abort_qd_sizes 00:27:27.347 ************************************ 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:27.347 * Looking for test storage... 00:27:27.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.347 01:04:02 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.607 01:04:02 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:27:27.607 01:04:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:29.511 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:29.512 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:29.512 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:29.512 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:29.512 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:29.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:29.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:27:29.512 00:27:29.512 --- 10.0.0.2 ping statistics --- 00:27:29.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.512 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:29.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:29.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:27:29.512 00:27:29.512 --- 10.0.0.1 ping statistics --- 00:27:29.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.512 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:29.512 01:04:04 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:30.885 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:30.885 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:30.885 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:30.885 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:30.885 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:30.885 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:30.885 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:30.885 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:30.885 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:30.886 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:30.886 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:30.886 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:30.886 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:30.886 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:30.886 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:30.886 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:31.820 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:31.820 01:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:31.820 01:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:31.820 01:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:31.820 01:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:31.820 01:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:31.820 01:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:31.820 01:04:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:31.820 01:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:31.820 01:04:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:31.820 01:04:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:31.820 01:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2770786 00:27:31.820 01:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:31.820 01:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2770786 00:27:31.820 01:04:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2770786 ']' 00:27:31.820 01:04:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.820 01:04:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:31.820 01:04:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:31.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.820 01:04:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:31.820 01:04:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:31.820 [2024-07-16 01:04:06.476634] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:27:31.820 [2024-07-16 01:04:06.476721] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.820 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.820 [2024-07-16 01:04:06.543329] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:32.079 [2024-07-16 01:04:06.665908] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:32.079 [2024-07-16 01:04:06.665963] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:32.079 [2024-07-16 01:04:06.665991] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.079 [2024-07-16 01:04:06.666004] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:32.079 [2024-07-16 01:04:06.666015] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:32.079 [2024-07-16 01:04:06.669903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.079 [2024-07-16 01:04:06.669979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:32.079 [2024-07-16 01:04:06.670093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:32.079 [2024-07-16 01:04:06.670097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:27:33.015 01:04:07 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.015 01:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:33.015 ************************************ 00:27:33.015 START TEST spdk_target_abort 00:27:33.015 ************************************ 00:27:33.015 01:04:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:27:33.015 01:04:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:33.015 01:04:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:27:33.015 01:04:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.015 01:04:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.613 spdk_targetn1 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.613 [2024-07-16 01:04:10.314873] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.613 [2024-07-16 01:04:10.347414] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:35.613 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:35.614 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:35.614 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:35.614 01:04:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:35.871 EAL: No free 2048 kB hugepages 
reported on node 1 00:27:39.155 Initializing NVMe Controllers 00:27:39.155 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:39.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:39.155 Initialization complete. Launching workers. 00:27:39.155 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9473, failed: 0 00:27:39.155 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1211, failed to submit 8262 00:27:39.155 success 844, unsuccess 367, failed 0 00:27:39.155 01:04:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:39.155 01:04:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:39.155 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.445 Initializing NVMe Controllers 00:27:42.445 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:42.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:42.445 Initialization complete. Launching workers. 00:27:42.445 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8640, failed: 0 00:27:42.445 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1254, failed to submit 7386 00:27:42.445 success 311, unsuccess 943, failed 0 00:27:42.445 01:04:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:42.445 01:04:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:42.445 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.730 Initializing NVMe Controllers 00:27:45.730 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:45.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:45.730 Initialization complete. Launching workers. 
00:27:45.730 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31526, failed: 0 00:27:45.730 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2731, failed to submit 28795 00:27:45.730 success 525, unsuccess 2206, failed 0 00:27:45.730 01:04:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:45.730 01:04:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.730 01:04:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:45.730 01:04:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.730 01:04:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:45.730 01:04:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.730 01:04:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:46.663 01:04:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.663 01:04:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2770786 00:27:46.663 01:04:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2770786 ']' 00:27:46.663 01:04:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2770786 00:27:46.663 01:04:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:27:46.663 01:04:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:46.663 01:04:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2770786 00:27:46.663 01:04:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:46.663 01:04:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:46.663 01:04:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2770786' 00:27:46.663 killing process with pid 2770786 00:27:46.663 01:04:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2770786 00:27:46.663 01:04:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2770786 00:27:46.931 00:27:46.931 real 0m14.190s 00:27:46.931 user 0m55.582s 00:27:46.931 sys 0m2.799s 00:27:46.931 01:04:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:46.931 01:04:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:46.931 ************************************ 00:27:46.931 END TEST spdk_target_abort 00:27:46.931 ************************************ 00:27:46.931 01:04:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:46.931 01:04:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:46.931 01:04:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:46.931 01:04:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:46.931 01:04:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:47.191 
************************************ 00:27:47.191 START TEST kernel_target_abort 00:27:47.191 ************************************ 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:47.191 01:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:48.127 Waiting for block devices as requested 00:27:48.127 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:48.386 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:48.386 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:48.644 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:48.644 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:48.644 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:48.644 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:48.901 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:48.901 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:48.901 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:48.901 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:49.158 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:49.158 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:49.158 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:49.416 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:49.416 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:49.416 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:49.673 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:49.673 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:49.673 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:49.673 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:49.673 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:49.673 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:49.673 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:49.673 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:49.673 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:49.673 No valid GPT data, bailing 00:27:49.673 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:49.673 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:49.674 01:04:24 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:27:49.674 00:27:49.674 Discovery Log Number of Records 2, Generation counter 2 00:27:49.674 =====Discovery Log Entry 0====== 00:27:49.674 trtype: tcp 00:27:49.674 adrfam: ipv4 00:27:49.674 subtype: current discovery subsystem 00:27:49.674 treq: not specified, sq flow control disable supported 00:27:49.674 portid: 1 00:27:49.674 trsvcid: 4420 00:27:49.674 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:49.674 traddr: 10.0.0.1 00:27:49.674 eflags: none 00:27:49.674 sectype: none 00:27:49.674 =====Discovery Log Entry 1====== 00:27:49.674 trtype: tcp 00:27:49.674 adrfam: ipv4 00:27:49.674 subtype: nvme subsystem 00:27:49.674 treq: not specified, sq flow control disable supported 00:27:49.674 portid: 1 00:27:49.674 trsvcid: 4420 00:27:49.674 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:49.674 traddr: 10.0.0.1 00:27:49.674 eflags: none 00:27:49.674 sectype: none 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:49.674 01:04:24 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:49.674 01:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:49.674 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.955 Initializing NVMe Controllers 00:27:52.955 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:52.955 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:52.955 Initialization complete. Launching workers. 00:27:52.955 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27865, failed: 0 00:27:52.955 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27865, failed to submit 0 00:27:52.955 success 0, unsuccess 27865, failed 0 00:27:52.955 01:04:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:52.955 01:04:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:52.955 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.319 Initializing NVMe Controllers 00:27:56.319 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:56.319 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:56.319 Initialization complete. Launching workers. 
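Note: before this qd loop started, configure_kernel_target stood up a Linux-kernel (nvmet) NVMe/TCP target entirely through configfs; the xtrace above shows the mkdir/echo/ln -s sequence but, as usual with xtrace, not the files the echoes are redirected into. Assuming those are the standard nvmet configfs attributes (which is what SPDK's nvmf/common.sh writes), the setup amounts to roughly the following; the kernel typically pulls in the TCP transport module on its own once the port is linked to the subsystem:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$port"

    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed target of the first echo in the trace
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"

    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"

    # exporting the subsystem through the port is what actually starts the listener
    ln -s "$subsys" "$port/subsystems/"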
00:27:56.319 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56940, failed: 0 00:27:56.319 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14334, failed to submit 42606 00:27:56.319 success 0, unsuccess 14334, failed 0 00:27:56.319 01:04:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:56.319 01:04:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:56.319 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.604 Initializing NVMe Controllers 00:27:59.604 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:59.604 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:59.604 Initialization complete. Launching workers. 00:27:59.604 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55712, failed: 0 00:27:59.604 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13902, failed to submit 41810 00:27:59.604 success 0, unsuccess 13902, failed 0 00:27:59.604 01:04:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:59.604 01:04:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:59.604 01:04:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:59.604 01:04:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:59.604 01:04:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:59.604 01:04:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:59.604 01:04:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:59.604 01:04:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:59.604 01:04:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:59.604 01:04:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:00.173 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:00.173 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:00.173 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:00.173 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:00.173 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:00.173 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:00.173 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:00.173 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:00.173 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:00.173 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:00.173 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:00.173 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:00.173 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:00.173 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:28:00.173 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:00.173 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:01.111 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:28:01.370 00:28:01.370 real 0m14.213s 00:28:01.370 user 0m4.519s 00:28:01.370 sys 0m3.431s 00:28:01.370 01:04:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:01.370 01:04:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:01.370 ************************************ 00:28:01.370 END TEST kernel_target_abort 00:28:01.370 ************************************ 00:28:01.370 01:04:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:28:01.370 01:04:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:01.370 01:04:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:01.370 01:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:01.370 01:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:28:01.370 01:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:01.370 01:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:28:01.370 01:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:01.370 01:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:01.370 rmmod nvme_tcp 00:28:01.370 rmmod nvme_fabrics 00:28:01.370 rmmod nvme_keyring 00:28:01.370 01:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:01.370 01:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:28:01.370 01:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:28:01.370 01:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2770786 ']' 00:28:01.370 01:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2770786 00:28:01.370 01:04:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2770786 ']' 00:28:01.370 01:04:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2770786 00:28:01.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2770786) - No such process 00:28:01.370 01:04:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2770786 is not found' 00:28:01.370 Process with pid 2770786 is not found 00:28:01.370 01:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:01.370 01:04:35 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:02.302 Waiting for block devices as requested 00:28:02.302 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:28:02.560 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:02.560 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:02.819 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:02.819 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:02.819 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:02.819 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:03.078 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:03.078 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:03.078 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:03.078 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:03.337 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:03.337 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:03.337 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:28:03.337 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:03.595 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:03.595 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:03.854 01:04:38 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:03.854 01:04:38 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:03.854 01:04:38 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:03.854 01:04:38 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:03.854 01:04:38 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.854 01:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:03.854 01:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.755 01:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:05.755 00:28:05.755 real 0m38.364s 00:28:05.755 user 1m2.404s 00:28:05.755 sys 0m9.494s 00:28:05.755 01:04:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:05.755 01:04:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:05.755 ************************************ 00:28:05.755 END TEST nvmf_abort_qd_sizes 00:28:05.755 ************************************ 00:28:05.755 01:04:40 -- common/autotest_common.sh@1142 -- # return 0 00:28:05.755 01:04:40 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:28:05.755 01:04:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:05.755 01:04:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:05.755 01:04:40 -- common/autotest_common.sh@10 -- # set +x 00:28:05.755 ************************************ 00:28:05.755 START TEST keyring_file 00:28:05.755 ************************************ 00:28:05.755 01:04:40 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:28:05.755 * Looking for test storage... 
00:28:05.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:28:05.755 01:04:40 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:28:05.755 01:04:40 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:05.755 01:04:40 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:05.755 01:04:40 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:05.755 01:04:40 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:05.755 01:04:40 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:05.755 01:04:40 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:05.755 01:04:40 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:05.755 01:04:40 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:05.755 01:04:40 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:05.755 01:04:40 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:05.755 01:04:40 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:05.755 01:04:40 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.012 01:04:40 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:06.012 01:04:40 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:06.012 01:04:40 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.012 01:04:40 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.012 01:04:40 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:06.012 01:04:40 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.012 01:04:40 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.012 01:04:40 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.012 01:04:40 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.012 01:04:40 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.012 01:04:40 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.012 01:04:40 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.012 01:04:40 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.012 01:04:40 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:06.012 01:04:40 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.012 01:04:40 keyring_file -- nvmf/common.sh@47 -- # : 0 00:28:06.012 01:04:40 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:06.012 01:04:40 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:06.012 01:04:40 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.012 01:04:40 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.012 01:04:40 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.012 01:04:40 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:06.012 01:04:40 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:06.012 01:04:40 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:06.012 01:04:40 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:06.012 01:04:40 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:06.013 01:04:40 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:06.013 01:04:40 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:06.013 01:04:40 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:06.013 01:04:40 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:06.013 01:04:40 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:06.013 01:04:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:06.013 01:04:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:06.013 01:04:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:06.013 01:04:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:06.013 01:04:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:06.013 01:04:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pQIP0MWFGp 00:28:06.013 01:04:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:06.013 01:04:40 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:06.013 01:04:40 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:06.013 01:04:40 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:06.013 01:04:40 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:06.013 01:04:40 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:06.013 01:04:40 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:06.013 01:04:40 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pQIP0MWFGp 00:28:06.013 01:04:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pQIP0MWFGp 00:28:06.013 01:04:40 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.pQIP0MWFGp 00:28:06.013 01:04:40 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:06.013 01:04:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:06.013 01:04:40 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:06.013 01:04:40 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:06.013 01:04:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:06.013 01:04:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:06.013 01:04:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.kAmqgaL2MV 00:28:06.013 01:04:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:06.013 01:04:40 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:06.013 01:04:40 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:06.013 01:04:40 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:06.013 01:04:40 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:06.013 01:04:40 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:06.013 01:04:40 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:06.013 01:04:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kAmqgaL2MV 00:28:06.013 01:04:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.kAmqgaL2MV 00:28:06.013 01:04:40 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.kAmqgaL2MV 00:28:06.013 01:04:40 keyring_file -- keyring/file.sh@30 -- # tgtpid=2776675 00:28:06.013 01:04:40 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:06.013 01:04:40 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2776675 00:28:06.013 01:04:40 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2776675 ']' 00:28:06.013 01:04:40 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.013 01:04:40 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:06.013 01:04:40 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.013 01:04:40 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:06.013 01:04:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:06.013 [2024-07-16 01:04:40.667414] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:28:06.013 [2024-07-16 01:04:40.667512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2776675 ] 00:28:06.013 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.013 [2024-07-16 01:04:40.732763] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.271 [2024-07-16 01:04:40.856065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:07.208 01:04:41 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:07.208 [2024-07-16 01:04:41.612005] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:07.208 null0 00:28:07.208 [2024-07-16 01:04:41.644034] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:07.208 [2024-07-16 01:04:41.644483] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:07.208 [2024-07-16 01:04:41.652036] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.208 01:04:41 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:07.208 [2024-07-16 01:04:41.660060] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:07.208 request: 00:28:07.208 { 00:28:07.208 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:07.208 "secure_channel": false, 00:28:07.208 "listen_address": { 00:28:07.208 "trtype": "tcp", 00:28:07.208 "traddr": "127.0.0.1", 00:28:07.208 "trsvcid": "4420" 00:28:07.208 }, 00:28:07.208 "method": "nvmf_subsystem_add_listener", 00:28:07.208 "req_id": 1 00:28:07.208 } 00:28:07.208 Got JSON-RPC error response 00:28:07.208 response: 00:28:07.208 { 00:28:07.208 "code": -32602, 00:28:07.208 "message": "Invalid parameters" 00:28:07.208 } 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@651 -- # es=1 
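Note: the request/response block just above is an expected failure, not a test problem. The rpc_cmd batch at keyring/file.sh@33 evidently already set up nqn.2016-06.io.spdk:cnode0 with a TLS-capable listener on 127.0.0.1:4420 (that is where the "TLS support is considered experimental" and psk_path deprecation notices come from), so re-adding the same listener is rejected with "Listener already exists" and surfaces as the -32602 "Invalid parameters" JSON-RPC error, which the NOT wrapper counts as a pass. Stripped of the rpc_cmd wrapper, the call being exercised is simply the one shown in the trace:

    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0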
00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:07.208 01:04:41 keyring_file -- keyring/file.sh@46 -- # bperfpid=2776811 00:28:07.208 01:04:41 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:07.208 01:04:41 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2776811 /var/tmp/bperf.sock 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2776811 ']' 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:07.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:07.208 01:04:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:07.208 [2024-07-16 01:04:41.708319] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 00:28:07.208 [2024-07-16 01:04:41.708397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2776811 ] 00:28:07.208 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.208 [2024-07-16 01:04:41.768266] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.208 [2024-07-16 01:04:41.884577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.467 01:04:42 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:07.467 01:04:42 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:07.467 01:04:42 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pQIP0MWFGp 00:28:07.467 01:04:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pQIP0MWFGp 00:28:07.725 01:04:42 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.kAmqgaL2MV 00:28:07.725 01:04:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.kAmqgaL2MV 00:28:07.983 01:04:42 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:28:07.983 01:04:42 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:28:07.983 01:04:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:07.983 01:04:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:07.983 01:04:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:08.240 01:04:42 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.pQIP0MWFGp == \/\t\m\p\/\t\m\p\.\p\Q\I\P\0\M\W\F\G\p ]] 00:28:08.241 01:04:42 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:28:08.241 01:04:42 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:08.241 01:04:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:08.241 01:04:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:08.241 01:04:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:08.498 01:04:42 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.kAmqgaL2MV == \/\t\m\p\/\t\m\p\.\k\A\m\q\g\a\L\2\M\V ]] 00:28:08.498 01:04:42 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:28:08.498 01:04:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:08.498 01:04:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:08.498 01:04:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:08.498 01:04:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:08.498 01:04:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:08.498 01:04:43 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:28:08.498 01:04:43 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:28:08.498 01:04:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:08.498 01:04:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:08.498 01:04:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:08.498 01:04:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:08.498 01:04:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:08.755 01:04:43 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:08.755 01:04:43 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:08.755 01:04:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:09.013 [2024-07-16 01:04:43.716575] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:09.271 nvme0n1 00:28:09.271 01:04:43 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:28:09.271 01:04:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:09.271 01:04:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:09.271 01:04:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:09.271 01:04:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:09.271 01:04:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:09.528 01:04:44 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:28:09.528 01:04:44 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:28:09.528 01:04:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:09.528 01:04:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:09.528 01:04:44 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:09.528 01:04:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:09.528 01:04:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:09.786 01:04:44 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:28:09.786 01:04:44 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:09.786 Running I/O for 1 seconds... 00:28:10.719 00:28:10.719 Latency(us) 00:28:10.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.719 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:10.719 nvme0n1 : 1.02 4210.51 16.45 0.00 0.00 30143.45 9903.22 42525.58 00:28:10.719 =================================================================================================================== 00:28:10.719 Total : 4210.51 16.45 0.00 0.00 30143.45 9903.22 42525.58 00:28:10.719 0 00:28:10.719 01:04:45 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:10.719 01:04:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:10.977 01:04:45 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:28:10.977 01:04:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:10.977 01:04:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:10.977 01:04:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:10.977 01:04:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:10.977 01:04:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:11.263 01:04:45 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:28:11.263 01:04:45 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:28:11.263 01:04:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:11.263 01:04:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:11.264 01:04:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:11.264 01:04:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:11.264 01:04:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:11.521 01:04:46 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:11.521 01:04:46 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:11.521 01:04:46 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:11.521 01:04:46 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:11.521 01:04:46 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:11.521 01:04:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:11.521 01:04:46 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:11.521 01:04:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:11.521 01:04:46 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:11.521 01:04:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:11.779 [2024-07-16 01:04:46.471238] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:11.779 [2024-07-16 01:04:46.472086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x67d430 (107): Transport endpoint is not connected 00:28:11.779 [2024-07-16 01:04:46.473078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x67d430 (9): Bad file descriptor 00:28:11.779 [2024-07-16 01:04:46.474078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:11.779 [2024-07-16 01:04:46.474098] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:11.779 [2024-07-16 01:04:46.474112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:11.779 request: 00:28:11.779 { 00:28:11.779 "name": "nvme0", 00:28:11.779 "trtype": "tcp", 00:28:11.779 "traddr": "127.0.0.1", 00:28:11.779 "adrfam": "ipv4", 00:28:11.779 "trsvcid": "4420", 00:28:11.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:11.779 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:11.779 "prchk_reftag": false, 00:28:11.779 "prchk_guard": false, 00:28:11.779 "hdgst": false, 00:28:11.779 "ddgst": false, 00:28:11.779 "psk": "key1", 00:28:11.779 "method": "bdev_nvme_attach_controller", 00:28:11.779 "req_id": 1 00:28:11.779 } 00:28:11.779 Got JSON-RPC error response 00:28:11.779 response: 00:28:11.779 { 00:28:11.779 "code": -5, 00:28:11.779 "message": "Input/output error" 00:28:11.779 } 00:28:11.779 01:04:46 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:11.779 01:04:46 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:11.779 01:04:46 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:11.779 01:04:46 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:11.779 01:04:46 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:28:11.779 01:04:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:11.779 01:04:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:11.779 01:04:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:11.779 01:04:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:11.779 01:04:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:12.036 01:04:46 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:28:12.036 01:04:46 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:28:12.036 01:04:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:12.036 01:04:46 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:12.036 01:04:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:12.036 01:04:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:12.036 01:04:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:12.294 01:04:46 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:12.294 01:04:46 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:28:12.294 01:04:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:12.551 01:04:47 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:28:12.552 01:04:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:12.810 01:04:47 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:28:12.810 01:04:47 keyring_file -- keyring/file.sh@77 -- # jq length 00:28:12.810 01:04:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:13.067 01:04:47 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:28:13.068 01:04:47 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.pQIP0MWFGp 00:28:13.068 01:04:47 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.pQIP0MWFGp 00:28:13.068 01:04:47 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:13.068 01:04:47 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.pQIP0MWFGp 00:28:13.068 01:04:47 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:13.068 01:04:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:13.068 01:04:47 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:13.068 01:04:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:13.068 01:04:47 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pQIP0MWFGp 00:28:13.068 01:04:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pQIP0MWFGp 00:28:13.325 [2024-07-16 01:04:47.972612] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pQIP0MWFGp': 0100660 00:28:13.325 [2024-07-16 01:04:47.972651] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:13.325 request: 00:28:13.325 { 00:28:13.325 "name": "key0", 00:28:13.325 "path": "/tmp/tmp.pQIP0MWFGp", 00:28:13.325 "method": "keyring_file_add_key", 00:28:13.325 "req_id": 1 00:28:13.325 } 00:28:13.325 Got JSON-RPC error response 00:28:13.325 response: 00:28:13.325 { 00:28:13.325 "code": -1, 00:28:13.325 "message": "Operation not permitted" 00:28:13.325 } 00:28:13.325 01:04:47 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:13.325 01:04:47 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:13.325 01:04:47 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:13.325 01:04:47 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:13.325 01:04:47 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.pQIP0MWFGp 00:28:13.325 01:04:47 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pQIP0MWFGp 00:28:13.325 01:04:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pQIP0MWFGp 00:28:13.584 01:04:48 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.pQIP0MWFGp 00:28:13.584 01:04:48 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:28:13.584 01:04:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:13.584 01:04:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:13.584 01:04:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:13.584 01:04:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:13.584 01:04:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:13.842 01:04:48 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:28:13.842 01:04:48 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:13.842 01:04:48 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:13.842 01:04:48 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:13.842 01:04:48 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:13.842 01:04:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:13.842 01:04:48 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:13.842 01:04:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:13.842 01:04:48 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:13.842 01:04:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:14.100 [2024-07-16 01:04:48.738713] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.pQIP0MWFGp': No such file or directory 00:28:14.100 [2024-07-16 01:04:48.738752] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:14.100 [2024-07-16 01:04:48.738784] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:14.100 [2024-07-16 01:04:48.738797] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:14.100 [2024-07-16 01:04:48.738810] bdev_nvme.c:6273:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:14.100 request: 00:28:14.100 { 00:28:14.100 "name": "nvme0", 00:28:14.100 "trtype": "tcp", 00:28:14.100 "traddr": "127.0.0.1", 00:28:14.100 "adrfam": "ipv4", 00:28:14.100 
"trsvcid": "4420", 00:28:14.100 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:14.100 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:14.100 "prchk_reftag": false, 00:28:14.100 "prchk_guard": false, 00:28:14.100 "hdgst": false, 00:28:14.100 "ddgst": false, 00:28:14.100 "psk": "key0", 00:28:14.100 "method": "bdev_nvme_attach_controller", 00:28:14.100 "req_id": 1 00:28:14.100 } 00:28:14.100 Got JSON-RPC error response 00:28:14.100 response: 00:28:14.100 { 00:28:14.100 "code": -19, 00:28:14.100 "message": "No such device" 00:28:14.100 } 00:28:14.100 01:04:48 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:14.100 01:04:48 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:14.100 01:04:48 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:14.100 01:04:48 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:14.100 01:04:48 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:28:14.100 01:04:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:14.360 01:04:49 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:14.360 01:04:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:14.360 01:04:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:14.360 01:04:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:14.360 01:04:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:14.360 01:04:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:14.360 01:04:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5kD1A2MEhO 00:28:14.360 01:04:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:14.360 01:04:49 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:14.360 01:04:49 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:14.360 01:04:49 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:14.360 01:04:49 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:14.360 01:04:49 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:14.360 01:04:49 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:14.360 01:04:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5kD1A2MEhO 00:28:14.360 01:04:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5kD1A2MEhO 00:28:14.360 01:04:49 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.5kD1A2MEhO 00:28:14.360 01:04:49 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5kD1A2MEhO 00:28:14.360 01:04:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5kD1A2MEhO 00:28:14.616 01:04:49 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:14.616 01:04:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:14.873 nvme0n1 00:28:14.873 
01:04:49 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:28:14.873 01:04:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:14.873 01:04:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:14.873 01:04:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:14.873 01:04:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:14.873 01:04:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:15.131 01:04:49 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:28:15.131 01:04:49 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:28:15.131 01:04:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:15.388 01:04:50 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:28:15.388 01:04:50 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:28:15.388 01:04:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:15.388 01:04:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:15.388 01:04:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:15.646 01:04:50 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:28:15.646 01:04:50 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:28:15.646 01:04:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:15.646 01:04:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:15.646 01:04:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:15.646 01:04:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:15.646 01:04:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:15.904 01:04:50 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:28:15.904 01:04:50 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:15.904 01:04:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:16.161 01:04:50 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:28:16.161 01:04:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:16.161 01:04:50 keyring_file -- keyring/file.sh@104 -- # jq length 00:28:16.419 01:04:51 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:28:16.419 01:04:51 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5kD1A2MEhO 00:28:16.419 01:04:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5kD1A2MEhO 00:28:16.676 01:04:51 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.kAmqgaL2MV 00:28:16.676 01:04:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.kAmqgaL2MV 00:28:16.934 01:04:51 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:16.934 01:04:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:17.191 nvme0n1 00:28:17.191 01:04:51 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:28:17.191 01:04:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:17.759 01:04:52 keyring_file -- keyring/file.sh@112 -- # config='{ 00:28:17.759 "subsystems": [ 00:28:17.759 { 00:28:17.759 "subsystem": "keyring", 00:28:17.759 "config": [ 00:28:17.759 { 00:28:17.759 "method": "keyring_file_add_key", 00:28:17.759 "params": { 00:28:17.759 "name": "key0", 00:28:17.759 "path": "/tmp/tmp.5kD1A2MEhO" 00:28:17.759 } 00:28:17.759 }, 00:28:17.759 { 00:28:17.759 "method": "keyring_file_add_key", 00:28:17.759 "params": { 00:28:17.759 "name": "key1", 00:28:17.759 "path": "/tmp/tmp.kAmqgaL2MV" 00:28:17.759 } 00:28:17.759 } 00:28:17.759 ] 00:28:17.759 }, 00:28:17.759 { 00:28:17.759 "subsystem": "iobuf", 00:28:17.759 "config": [ 00:28:17.759 { 00:28:17.759 "method": "iobuf_set_options", 00:28:17.759 "params": { 00:28:17.759 "small_pool_count": 8192, 00:28:17.759 "large_pool_count": 1024, 00:28:17.759 "small_bufsize": 8192, 00:28:17.759 "large_bufsize": 135168 00:28:17.759 } 00:28:17.759 } 00:28:17.759 ] 00:28:17.759 }, 00:28:17.759 { 00:28:17.759 "subsystem": "sock", 00:28:17.759 "config": [ 00:28:17.759 { 00:28:17.759 "method": "sock_set_default_impl", 00:28:17.759 "params": { 00:28:17.759 "impl_name": "posix" 00:28:17.759 } 00:28:17.759 }, 00:28:17.759 { 00:28:17.759 "method": "sock_impl_set_options", 00:28:17.759 "params": { 00:28:17.759 "impl_name": "ssl", 00:28:17.759 "recv_buf_size": 4096, 00:28:17.759 "send_buf_size": 4096, 00:28:17.759 "enable_recv_pipe": true, 00:28:17.759 "enable_quickack": false, 00:28:17.759 "enable_placement_id": 0, 00:28:17.759 "enable_zerocopy_send_server": true, 00:28:17.759 "enable_zerocopy_send_client": false, 00:28:17.759 "zerocopy_threshold": 0, 00:28:17.759 "tls_version": 0, 00:28:17.759 "enable_ktls": false 00:28:17.759 } 00:28:17.759 }, 00:28:17.759 { 00:28:17.759 "method": "sock_impl_set_options", 00:28:17.759 "params": { 00:28:17.759 "impl_name": "posix", 00:28:17.759 "recv_buf_size": 2097152, 00:28:17.759 "send_buf_size": 2097152, 00:28:17.759 "enable_recv_pipe": true, 00:28:17.759 "enable_quickack": false, 00:28:17.759 "enable_placement_id": 0, 00:28:17.759 "enable_zerocopy_send_server": true, 00:28:17.759 "enable_zerocopy_send_client": false, 00:28:17.759 "zerocopy_threshold": 0, 00:28:17.759 "tls_version": 0, 00:28:17.759 "enable_ktls": false 00:28:17.759 } 00:28:17.759 } 00:28:17.759 ] 00:28:17.759 }, 00:28:17.759 { 00:28:17.759 "subsystem": "vmd", 00:28:17.759 "config": [] 00:28:17.759 }, 00:28:17.759 { 00:28:17.759 "subsystem": "accel", 00:28:17.759 "config": [ 00:28:17.759 { 00:28:17.759 "method": "accel_set_options", 00:28:17.759 "params": { 00:28:17.759 "small_cache_size": 128, 00:28:17.759 "large_cache_size": 16, 00:28:17.759 "task_count": 2048, 00:28:17.759 "sequence_count": 2048, 00:28:17.759 "buf_count": 2048 00:28:17.759 } 00:28:17.759 } 00:28:17.759 ] 00:28:17.759 
}, 00:28:17.759 { 00:28:17.759 "subsystem": "bdev", 00:28:17.759 "config": [ 00:28:17.759 { 00:28:17.759 "method": "bdev_set_options", 00:28:17.759 "params": { 00:28:17.759 "bdev_io_pool_size": 65535, 00:28:17.759 "bdev_io_cache_size": 256, 00:28:17.759 "bdev_auto_examine": true, 00:28:17.759 "iobuf_small_cache_size": 128, 00:28:17.759 "iobuf_large_cache_size": 16 00:28:17.759 } 00:28:17.759 }, 00:28:17.759 { 00:28:17.759 "method": "bdev_raid_set_options", 00:28:17.759 "params": { 00:28:17.759 "process_window_size_kb": 1024 00:28:17.759 } 00:28:17.759 }, 00:28:17.759 { 00:28:17.759 "method": "bdev_iscsi_set_options", 00:28:17.759 "params": { 00:28:17.759 "timeout_sec": 30 00:28:17.759 } 00:28:17.759 }, 00:28:17.759 { 00:28:17.759 "method": "bdev_nvme_set_options", 00:28:17.759 "params": { 00:28:17.759 "action_on_timeout": "none", 00:28:17.759 "timeout_us": 0, 00:28:17.759 "timeout_admin_us": 0, 00:28:17.759 "keep_alive_timeout_ms": 10000, 00:28:17.759 "arbitration_burst": 0, 00:28:17.759 "low_priority_weight": 0, 00:28:17.759 "medium_priority_weight": 0, 00:28:17.759 "high_priority_weight": 0, 00:28:17.759 "nvme_adminq_poll_period_us": 10000, 00:28:17.759 "nvme_ioq_poll_period_us": 0, 00:28:17.759 "io_queue_requests": 512, 00:28:17.759 "delay_cmd_submit": true, 00:28:17.759 "transport_retry_count": 4, 00:28:17.759 "bdev_retry_count": 3, 00:28:17.759 "transport_ack_timeout": 0, 00:28:17.759 "ctrlr_loss_timeout_sec": 0, 00:28:17.759 "reconnect_delay_sec": 0, 00:28:17.759 "fast_io_fail_timeout_sec": 0, 00:28:17.759 "disable_auto_failback": false, 00:28:17.759 "generate_uuids": false, 00:28:17.760 "transport_tos": 0, 00:28:17.760 "nvme_error_stat": false, 00:28:17.760 "rdma_srq_size": 0, 00:28:17.760 "io_path_stat": false, 00:28:17.760 "allow_accel_sequence": false, 00:28:17.760 "rdma_max_cq_size": 0, 00:28:17.760 "rdma_cm_event_timeout_ms": 0, 00:28:17.760 "dhchap_digests": [ 00:28:17.760 "sha256", 00:28:17.760 "sha384", 00:28:17.760 "sha512" 00:28:17.760 ], 00:28:17.760 "dhchap_dhgroups": [ 00:28:17.760 "null", 00:28:17.760 "ffdhe2048", 00:28:17.760 "ffdhe3072", 00:28:17.760 "ffdhe4096", 00:28:17.760 "ffdhe6144", 00:28:17.760 "ffdhe8192" 00:28:17.760 ] 00:28:17.760 } 00:28:17.760 }, 00:28:17.760 { 00:28:17.760 "method": "bdev_nvme_attach_controller", 00:28:17.760 "params": { 00:28:17.760 "name": "nvme0", 00:28:17.760 "trtype": "TCP", 00:28:17.760 "adrfam": "IPv4", 00:28:17.760 "traddr": "127.0.0.1", 00:28:17.760 "trsvcid": "4420", 00:28:17.760 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:17.760 "prchk_reftag": false, 00:28:17.760 "prchk_guard": false, 00:28:17.760 "ctrlr_loss_timeout_sec": 0, 00:28:17.760 "reconnect_delay_sec": 0, 00:28:17.760 "fast_io_fail_timeout_sec": 0, 00:28:17.760 "psk": "key0", 00:28:17.760 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:17.760 "hdgst": false, 00:28:17.760 "ddgst": false 00:28:17.760 } 00:28:17.760 }, 00:28:17.760 { 00:28:17.760 "method": "bdev_nvme_set_hotplug", 00:28:17.760 "params": { 00:28:17.760 "period_us": 100000, 00:28:17.760 "enable": false 00:28:17.760 } 00:28:17.760 }, 00:28:17.760 { 00:28:17.760 "method": "bdev_wait_for_examine" 00:28:17.760 } 00:28:17.760 ] 00:28:17.760 }, 00:28:17.760 { 00:28:17.760 "subsystem": "nbd", 00:28:17.760 "config": [] 00:28:17.760 } 00:28:17.760 ] 00:28:17.760 }' 00:28:17.760 01:04:52 keyring_file -- keyring/file.sh@114 -- # killprocess 2776811 00:28:17.760 01:04:52 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2776811 ']' 00:28:17.760 01:04:52 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 2776811 00:28:17.760 01:04:52 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:17.760 01:04:52 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:17.760 01:04:52 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2776811 00:28:17.760 01:04:52 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:17.760 01:04:52 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:17.760 01:04:52 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2776811' 00:28:17.760 killing process with pid 2776811 00:28:17.760 01:04:52 keyring_file -- common/autotest_common.sh@967 -- # kill 2776811 00:28:17.760 Received shutdown signal, test time was about 1.000000 seconds 00:28:17.760 00:28:17.760 Latency(us) 00:28:17.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.760 =================================================================================================================== 00:28:17.760 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:17.760 01:04:52 keyring_file -- common/autotest_common.sh@972 -- # wait 2776811 00:28:17.760 01:04:52 keyring_file -- keyring/file.sh@117 -- # bperfpid=2778153 00:28:17.760 01:04:52 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2778153 /var/tmp/bperf.sock 00:28:17.760 01:04:52 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2778153 ']' 00:28:17.760 01:04:52 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:17.760 01:04:52 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:17.760 01:04:52 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:17.760 01:04:52 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:17.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
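The restart sequence traced above (save_config at keyring/file.sh@112, killprocess of the first bperf, then a new bdevperf launched with "-z -c /dev/fd/63") boils down to feeding the saved JSON back in over a process substitution; the JSON echoed just below is what travels through that descriptor. A minimal sketch of the pattern, using this job's rpc.py and bdevperf paths (not the literal keyring/file.sh code; $old_bperf_pid is illustrative):

# Sketch: dump the live config, stop the old bperf, hand the JSON to a new
# instance through process substitution (hence "-c /dev/fd/63" in the trace).
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

config=$("$rpc_py" -s /var/tmp/bperf.sock save_config)
kill "$old_bperf_pid"

"$bperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
         -r /var/tmp/bperf.sock -z -c <(echo "$config") &
new_bperf_pid=$!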
00:28:17.760 01:04:52 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:17.760 01:04:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:17.760 01:04:52 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:28:17.760 "subsystems": [ 00:28:17.760 { 00:28:17.760 "subsystem": "keyring", 00:28:17.760 "config": [ 00:28:17.760 { 00:28:17.760 "method": "keyring_file_add_key", 00:28:17.760 "params": { 00:28:17.760 "name": "key0", 00:28:17.760 "path": "/tmp/tmp.5kD1A2MEhO" 00:28:17.760 } 00:28:17.760 }, 00:28:17.760 { 00:28:17.760 "method": "keyring_file_add_key", 00:28:17.760 "params": { 00:28:17.760 "name": "key1", 00:28:17.760 "path": "/tmp/tmp.kAmqgaL2MV" 00:28:17.760 } 00:28:17.760 } 00:28:17.760 ] 00:28:17.760 }, 00:28:17.760 { 00:28:17.760 "subsystem": "iobuf", 00:28:17.760 "config": [ 00:28:17.760 { 00:28:17.760 "method": "iobuf_set_options", 00:28:17.760 "params": { 00:28:17.760 "small_pool_count": 8192, 00:28:17.760 "large_pool_count": 1024, 00:28:17.760 "small_bufsize": 8192, 00:28:17.760 "large_bufsize": 135168 00:28:17.760 } 00:28:17.760 } 00:28:17.760 ] 00:28:17.760 }, 00:28:17.760 { 00:28:17.760 "subsystem": "sock", 00:28:17.760 "config": [ 00:28:17.760 { 00:28:17.760 "method": "sock_set_default_impl", 00:28:17.760 "params": { 00:28:17.760 "impl_name": "posix" 00:28:17.760 } 00:28:17.760 }, 00:28:17.760 { 00:28:17.760 "method": "sock_impl_set_options", 00:28:17.760 "params": { 00:28:17.760 "impl_name": "ssl", 00:28:17.760 "recv_buf_size": 4096, 00:28:17.760 "send_buf_size": 4096, 00:28:17.760 "enable_recv_pipe": true, 00:28:17.760 "enable_quickack": false, 00:28:17.760 "enable_placement_id": 0, 00:28:17.760 "enable_zerocopy_send_server": true, 00:28:17.760 "enable_zerocopy_send_client": false, 00:28:17.760 "zerocopy_threshold": 0, 00:28:17.760 "tls_version": 0, 00:28:17.760 "enable_ktls": false 00:28:17.760 } 00:28:17.760 }, 00:28:17.760 { 00:28:17.760 "method": "sock_impl_set_options", 00:28:17.760 "params": { 00:28:17.760 "impl_name": "posix", 00:28:17.760 "recv_buf_size": 2097152, 00:28:17.760 "send_buf_size": 2097152, 00:28:17.760 "enable_recv_pipe": true, 00:28:17.760 "enable_quickack": false, 00:28:17.760 "enable_placement_id": 0, 00:28:17.760 "enable_zerocopy_send_server": true, 00:28:17.761 "enable_zerocopy_send_client": false, 00:28:17.761 "zerocopy_threshold": 0, 00:28:17.761 "tls_version": 0, 00:28:17.761 "enable_ktls": false 00:28:17.761 } 00:28:17.761 } 00:28:17.761 ] 00:28:17.761 }, 00:28:17.761 { 00:28:17.761 "subsystem": "vmd", 00:28:17.761 "config": [] 00:28:17.761 }, 00:28:17.761 { 00:28:17.761 "subsystem": "accel", 00:28:17.761 "config": [ 00:28:17.761 { 00:28:17.761 "method": "accel_set_options", 00:28:17.761 "params": { 00:28:17.761 "small_cache_size": 128, 00:28:17.761 "large_cache_size": 16, 00:28:17.761 "task_count": 2048, 00:28:17.761 "sequence_count": 2048, 00:28:17.761 "buf_count": 2048 00:28:17.761 } 00:28:17.761 } 00:28:17.761 ] 00:28:17.761 }, 00:28:17.761 { 00:28:17.761 "subsystem": "bdev", 00:28:17.761 "config": [ 00:28:17.761 { 00:28:17.761 "method": "bdev_set_options", 00:28:17.761 "params": { 00:28:17.761 "bdev_io_pool_size": 65535, 00:28:17.761 "bdev_io_cache_size": 256, 00:28:17.761 "bdev_auto_examine": true, 00:28:17.761 "iobuf_small_cache_size": 128, 00:28:17.761 "iobuf_large_cache_size": 16 00:28:17.761 } 00:28:17.761 }, 00:28:17.761 { 00:28:17.761 "method": "bdev_raid_set_options", 00:28:17.761 "params": { 00:28:17.761 "process_window_size_kb": 1024 00:28:17.761 } 00:28:17.761 }, 00:28:17.761 { 00:28:17.761 
"method": "bdev_iscsi_set_options", 00:28:17.761 "params": { 00:28:17.761 "timeout_sec": 30 00:28:17.761 } 00:28:17.761 }, 00:28:17.761 { 00:28:17.761 "method": "bdev_nvme_set_options", 00:28:17.761 "params": { 00:28:17.761 "action_on_timeout": "none", 00:28:17.761 "timeout_us": 0, 00:28:17.761 "timeout_admin_us": 0, 00:28:17.761 "keep_alive_timeout_ms": 10000, 00:28:17.761 "arbitration_burst": 0, 00:28:17.761 "low_priority_weight": 0, 00:28:17.761 "medium_priority_weight": 0, 00:28:17.761 "high_priority_weight": 0, 00:28:17.761 "nvme_adminq_poll_period_us": 10000, 00:28:17.761 "nvme_ioq_poll_period_us": 0, 00:28:17.761 "io_queue_requests": 512, 00:28:17.761 "delay_cmd_submit": true, 00:28:17.761 "transport_retry_count": 4, 00:28:17.761 "bdev_retry_count": 3, 00:28:17.761 "transport_ack_timeout": 0, 00:28:17.761 "ctrlr_loss_timeout_sec": 0, 00:28:17.761 "reconnect_delay_sec": 0, 00:28:17.761 "fast_io_fail_timeout_sec": 0, 00:28:17.761 "disable_auto_failback": false, 00:28:17.761 "generate_uuids": false, 00:28:17.761 "transport_tos": 0, 00:28:17.761 "nvme_error_stat": false, 00:28:17.761 "rdma_srq_size": 0, 00:28:17.761 "io_path_stat": false, 00:28:17.761 "allow_accel_sequence": false, 00:28:17.761 "rdma_max_cq_size": 0, 00:28:17.761 "rdma_cm_event_timeout_ms": 0, 00:28:17.761 "dhchap_digests": [ 00:28:17.761 "sha256", 00:28:17.761 "sha384", 00:28:17.761 "sha512" 00:28:17.761 ], 00:28:17.761 "dhchap_dhgroups": [ 00:28:17.761 "null", 00:28:17.761 "ffdhe2048", 00:28:17.761 "ffdhe3072", 00:28:17.761 "ffdhe4096", 00:28:17.761 "ffdhe6144", 00:28:17.761 "ffdhe8192" 00:28:17.761 ] 00:28:17.761 } 00:28:17.761 }, 00:28:17.761 { 00:28:17.761 "method": "bdev_nvme_attach_controller", 00:28:17.761 "params": { 00:28:17.761 "name": "nvme0", 00:28:17.761 "trtype": "TCP", 00:28:17.761 "adrfam": "IPv4", 00:28:17.761 "traddr": "127.0.0.1", 00:28:17.761 "trsvcid": "4420", 00:28:17.761 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:17.761 "prchk_reftag": false, 00:28:17.761 "prchk_guard": false, 00:28:17.761 "ctrlr_loss_timeout_sec": 0, 00:28:17.761 "reconnect_delay_sec": 0, 00:28:17.761 "fast_io_fail_timeout_sec": 0, 00:28:17.761 "psk": "key0", 00:28:17.761 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:17.761 "hdgst": false, 00:28:17.761 "ddgst": false 00:28:17.761 } 00:28:17.761 }, 00:28:17.761 { 00:28:17.761 "method": "bdev_nvme_set_hotplug", 00:28:17.761 "params": { 00:28:17.761 "period_us": 100000, 00:28:17.761 "enable": false 00:28:17.761 } 00:28:17.761 }, 00:28:17.761 { 00:28:17.761 "method": "bdev_wait_for_examine" 00:28:17.761 } 00:28:17.761 ] 00:28:17.761 }, 00:28:17.761 { 00:28:17.761 "subsystem": "nbd", 00:28:17.761 "config": [] 00:28:17.761 } 00:28:17.761 ] 00:28:17.761 }' 00:28:18.019 [2024-07-16 01:04:52.553298] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:28:18.019 [2024-07-16 01:04:52.553369] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2778153 ] 00:28:18.019 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.019 [2024-07-16 01:04:52.612207] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.019 [2024-07-16 01:04:52.719952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.277 [2024-07-16 01:04:52.907096] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:18.842 01:04:53 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:18.842 01:04:53 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:18.842 01:04:53 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:28:18.842 01:04:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:18.842 01:04:53 keyring_file -- keyring/file.sh@120 -- # jq length 00:28:19.099 01:04:53 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:28:19.099 01:04:53 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:28:19.099 01:04:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:19.099 01:04:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:19.099 01:04:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:19.099 01:04:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:19.099 01:04:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:19.356 01:04:54 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:19.356 01:04:54 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:28:19.356 01:04:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:19.356 01:04:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:19.356 01:04:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:19.356 01:04:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:19.356 01:04:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:19.614 01:04:54 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:28:19.614 01:04:54 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:28:19.614 01:04:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:19.614 01:04:54 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:28:19.871 01:04:54 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:28:19.871 01:04:54 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:19.871 01:04:54 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.5kD1A2MEhO /tmp/tmp.kAmqgaL2MV 00:28:19.871 01:04:54 keyring_file -- keyring/file.sh@20 -- # killprocess 2778153 00:28:19.871 01:04:54 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2778153 ']' 00:28:19.871 01:04:54 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2778153 00:28:19.871 01:04:54 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:28:19.871 01:04:54 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:19.871 01:04:54 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2778153 00:28:19.871 01:04:54 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:19.871 01:04:54 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:19.871 01:04:54 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2778153' 00:28:19.871 killing process with pid 2778153 00:28:19.871 01:04:54 keyring_file -- common/autotest_common.sh@967 -- # kill 2778153 00:28:19.872 Received shutdown signal, test time was about 1.000000 seconds 00:28:19.872 00:28:19.872 Latency(us) 00:28:19.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.872 =================================================================================================================== 00:28:19.872 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:19.872 01:04:54 keyring_file -- common/autotest_common.sh@972 -- # wait 2778153 00:28:20.129 01:04:54 keyring_file -- keyring/file.sh@21 -- # killprocess 2776675 00:28:20.129 01:04:54 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2776675 ']' 00:28:20.129 01:04:54 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2776675 00:28:20.129 01:04:54 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:20.129 01:04:54 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:20.129 01:04:54 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2776675 00:28:20.129 01:04:54 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:20.129 01:04:54 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:20.129 01:04:54 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2776675' 00:28:20.129 killing process with pid 2776675 00:28:20.129 01:04:54 keyring_file -- common/autotest_common.sh@967 -- # kill 2776675 00:28:20.129 [2024-07-16 01:04:54.822997] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:20.129 01:04:54 keyring_file -- common/autotest_common.sh@972 -- # wait 2776675 00:28:20.696 00:28:20.696 real 0m14.795s 00:28:20.696 user 0m35.787s 00:28:20.696 sys 0m3.289s 00:28:20.696 01:04:55 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:20.696 01:04:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:20.696 ************************************ 00:28:20.696 END TEST keyring_file 00:28:20.696 ************************************ 00:28:20.696 01:04:55 -- common/autotest_common.sh@1142 -- # return 0 00:28:20.696 01:04:55 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:28:20.696 01:04:55 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:28:20.696 01:04:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:20.696 01:04:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:20.696 01:04:55 -- common/autotest_common.sh@10 -- # set +x 00:28:20.696 ************************************ 00:28:20.696 START TEST keyring_linux 00:28:20.696 ************************************ 00:28:20.696 01:04:55 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:28:20.696 * Looking for test storage... 00:28:20.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:28:20.696 01:04:55 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:28:20.696 01:04:55 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.696 01:04:55 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:28:20.696 01:04:55 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.696 01:04:55 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.696 01:04:55 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.696 01:04:55 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.697 01:04:55 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.697 01:04:55 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.697 01:04:55 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.697 01:04:55 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.697 01:04:55 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.697 01:04:55 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.697 01:04:55 keyring_linux -- paths/export.sh@5 -- # export PATH 00:28:20.697 01:04:55 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:20.697 01:04:55 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:20.697 01:04:55 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:20.697 01:04:55 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:20.697 01:04:55 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:28:20.697 01:04:55 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:28:20.697 01:04:55 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:28:20.697 01:04:55 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:28:20.697 01:04:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:20.697 01:04:55 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:28:20.697 01:04:55 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:20.697 01:04:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:20.697 01:04:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:28:20.697 01:04:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:20.697 01:04:55 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:28:20.697 01:04:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:28:20.697 /tmp/:spdk-test:key0 00:28:20.697 01:04:55 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:28:20.697 01:04:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:20.697 01:04:55 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:28:20.697 01:04:55 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:20.697 01:04:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:20.697 01:04:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:28:20.697 01:04:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:20.697 01:04:55 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:20.697 01:04:55 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:28:20.697 01:04:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:28:20.697 /tmp/:spdk-test:key1 00:28:20.697 01:04:55 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2778633 00:28:20.697 01:04:55 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:20.697 01:04:55 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2778633 00:28:20.697 01:04:55 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2778633 ']' 00:28:20.697 01:04:55 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.697 01:04:55 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:20.697 01:04:55 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.697 01:04:55 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:20.697 01:04:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:20.955 [2024-07-16 01:04:55.491328] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
00:28:20.955 [2024-07-16 01:04:55.491407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2778633 ] 00:28:20.955 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.955 [2024-07-16 01:04:55.547623] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.955 [2024-07-16 01:04:55.653834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.213 01:04:55 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:21.213 01:04:55 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:28:21.213 01:04:55 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:28:21.213 01:04:55 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.213 01:04:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:21.213 [2024-07-16 01:04:55.914752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.213 null0 00:28:21.213 [2024-07-16 01:04:55.946795] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:21.213 [2024-07-16 01:04:55.947304] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:21.213 01:04:55 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.213 01:04:55 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:28:21.213 896977613 00:28:21.213 01:04:55 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:28:21.472 310678516 00:28:21.472 01:04:55 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2778649 00:28:21.472 01:04:55 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2778649 /var/tmp/bperf.sock 00:28:21.472 01:04:55 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:28:21.472 01:04:55 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2778649 ']' 00:28:21.472 01:04:55 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:21.472 01:04:55 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:21.472 01:04:55 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:21.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:21.472 01:04:55 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:21.472 01:04:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:21.472 [2024-07-16 01:04:56.019548] Starting SPDK v24.09-pre git sha1 8c20d24e0 / DPDK 24.03.0 initialization... 
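The NVMeTLSkey-1:00:...: strings registered with keyctl just above are the TLS PSK interchange form produced by format_interchange_psk (nvmf/common.sh@702-705, the "python -" step in the trace). A sketch of what that helper appears to compute; the field layout and the little-endian CRC-32 trailer are inferred from the trace and the 48-character base64 blob, so treat them as an assumption rather than a spec quote:

# Assumed layout: "<prefix>:<2-hex-digit digest>:base64(key || crc32(key)):".
format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte CRC trailer (assumed)
print(f"{prefix}:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PYEOF
}
# e.g. format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
# yields a string of the same shape as the :spdk-test:key0 value above.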
00:28:21.472 [2024-07-16 01:04:56.019624] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2778649 ] 00:28:21.472 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.472 [2024-07-16 01:04:56.076381] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.472 [2024-07-16 01:04:56.193699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.472 01:04:56 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:21.472 01:04:56 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:28:21.472 01:04:56 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:28:21.472 01:04:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:28:21.731 01:04:56 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:28:21.731 01:04:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:22.297 01:04:56 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:22.297 01:04:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:22.297 [2024-07-16 01:04:57.027504] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:22.555 nvme0n1 00:28:22.555 01:04:57 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:28:22.555 01:04:57 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:28:22.555 01:04:57 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:22.555 01:04:57 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:22.555 01:04:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:22.555 01:04:57 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:22.813 01:04:57 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:28:22.813 01:04:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:22.813 01:04:57 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:28:22.813 01:04:57 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:28:22.813 01:04:57 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:22.813 01:04:57 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:28:22.813 01:04:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:23.071 01:04:57 keyring_linux -- keyring/linux.sh@25 -- # sn=896977613 00:28:23.071 01:04:57 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:28:23.071 01:04:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:28:23.071 01:04:57 keyring_linux -- keyring/linux.sh@26 -- # [[ 896977613 == \8\9\6\9\7\7\6\1\3 ]] 00:28:23.071 01:04:57 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 896977613 00:28:23.071 01:04:57 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:28:23.071 01:04:57 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:23.071 Running I/O for 1 seconds... 00:28:24.006 00:28:24.006 Latency(us) 00:28:24.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.006 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:24.006 nvme0n1 : 1.02 3638.88 14.21 0.00 0.00 34771.45 6505.05 44079.03 00:28:24.006 =================================================================================================================== 00:28:24.006 Total : 3638.88 14.21 0.00 0.00 34771.45 6505.05 44079.03 00:28:24.006 0 00:28:24.298 01:04:58 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:24.298 01:04:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:24.298 01:04:59 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:28:24.298 01:04:59 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:28:24.298 01:04:59 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:24.298 01:04:59 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:24.298 01:04:59 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:24.298 01:04:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:24.555 01:04:59 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:28:24.555 01:04:59 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:24.555 01:04:59 keyring_linux -- keyring/linux.sh@23 -- # return 00:28:24.555 01:04:59 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:24.555 01:04:59 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:28:24.555 01:04:59 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:24.555 01:04:59 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:24.555 01:04:59 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:24.555 01:04:59 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:24.555 01:04:59 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:24.555 01:04:59 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:24.555 01:04:59 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:24.812 [2024-07-16 01:04:59.497533] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:24.812 [2024-07-16 01:04:59.497789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18092b0 (107): Transport endpoint is not connected 00:28:24.812 [2024-07-16 01:04:59.498779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18092b0 (9): Bad file descriptor 00:28:24.812 [2024-07-16 01:04:59.499777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:24.812 [2024-07-16 01:04:59.499800] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:24.812 [2024-07-16 01:04:59.499815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:24.812 request: 00:28:24.812 { 00:28:24.812 "name": "nvme0", 00:28:24.812 "trtype": "tcp", 00:28:24.812 "traddr": "127.0.0.1", 00:28:24.812 "adrfam": "ipv4", 00:28:24.812 "trsvcid": "4420", 00:28:24.812 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:24.812 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:24.812 "prchk_reftag": false, 00:28:24.812 "prchk_guard": false, 00:28:24.812 "hdgst": false, 00:28:24.812 "ddgst": false, 00:28:24.812 "psk": ":spdk-test:key1", 00:28:24.812 "method": "bdev_nvme_attach_controller", 00:28:24.812 "req_id": 1 00:28:24.812 } 00:28:24.812 Got JSON-RPC error response 00:28:24.812 response: 00:28:24.812 { 00:28:24.812 "code": -5, 00:28:24.812 "message": "Input/output error" 00:28:24.812 } 00:28:24.812 01:04:59 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:28:24.812 01:04:59 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:24.812 01:04:59 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:24.812 01:04:59 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:24.812 01:04:59 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:28:24.812 01:04:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:24.812 01:04:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:28:24.812 01:04:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:28:24.812 01:04:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:28:24.812 01:04:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:28:24.812 01:04:59 keyring_linux -- keyring/linux.sh@33 -- # sn=896977613 00:28:24.812 01:04:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 896977613 00:28:24.812 1 links removed 00:28:24.812 01:04:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:24.812 01:04:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:28:24.812 01:04:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:28:24.812 01:04:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:28:24.812 01:04:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:28:24.812 01:04:59 keyring_linux -- keyring/linux.sh@33 -- # sn=310678516 00:28:24.812 
01:04:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 310678516 00:28:24.812 1 links removed 00:28:24.812 01:04:59 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2778649 00:28:24.812 01:04:59 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2778649 ']' 00:28:24.812 01:04:59 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2778649 00:28:24.812 01:04:59 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:28:24.812 01:04:59 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:24.812 01:04:59 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2778649 00:28:24.812 01:04:59 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:24.812 01:04:59 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:24.812 01:04:59 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2778649' 00:28:24.812 killing process with pid 2778649 00:28:24.812 01:04:59 keyring_linux -- common/autotest_common.sh@967 -- # kill 2778649 00:28:24.812 Received shutdown signal, test time was about 1.000000 seconds 00:28:24.812 00:28:24.812 Latency(us) 00:28:24.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.812 =================================================================================================================== 00:28:24.812 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:24.812 01:04:59 keyring_linux -- common/autotest_common.sh@972 -- # wait 2778649 00:28:25.069 01:04:59 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2778633 00:28:25.069 01:04:59 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2778633 ']' 00:28:25.069 01:04:59 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2778633 00:28:25.069 01:04:59 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:28:25.069 01:04:59 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:25.069 01:04:59 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2778633 00:28:25.326 01:04:59 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:25.326 01:04:59 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:25.326 01:04:59 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2778633' 00:28:25.326 killing process with pid 2778633 00:28:25.326 01:04:59 keyring_linux -- common/autotest_common.sh@967 -- # kill 2778633 00:28:25.326 01:04:59 keyring_linux -- common/autotest_common.sh@972 -- # wait 2778633 00:28:25.582 00:28:25.582 real 0m5.024s 00:28:25.582 user 0m9.314s 00:28:25.582 sys 0m1.466s 00:28:25.582 01:05:00 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:25.582 01:05:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:25.582 ************************************ 00:28:25.582 END TEST keyring_linux 00:28:25.582 ************************************ 00:28:25.838 01:05:00 -- common/autotest_common.sh@1142 -- # return 0 00:28:25.838 01:05:00 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:28:25.838 01:05:00 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:25.838 01:05:00 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:28:25.838 01:05:00 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:28:25.838 01:05:00 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:28:25.838 01:05:00 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:28:25.838 01:05:00 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:28:25.838 01:05:00 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:25.838 01:05:00 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:25.838 01:05:00 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:28:25.838 01:05:00 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:25.838 01:05:00 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:28:25.838 01:05:00 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:25.838 01:05:00 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:25.838 01:05:00 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:25.838 01:05:00 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:28:25.838 01:05:00 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:28:25.838 01:05:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:25.838 01:05:00 -- common/autotest_common.sh@10 -- # set +x 00:28:25.838 01:05:00 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:28:25.838 01:05:00 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:28:25.838 01:05:00 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:28:25.838 01:05:00 -- common/autotest_common.sh@10 -- # set +x 00:28:27.736 INFO: APP EXITING 00:28:27.736 INFO: killing all VMs 00:28:27.736 INFO: killing vhost app 00:28:27.736 INFO: EXIT DONE 00:28:28.670 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:28:28.670 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:28:28.670 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:28:28.670 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:28:28.670 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:28:28.670 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:28:28.670 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:28:28.670 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:28:28.670 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:28:28.670 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:28:28.670 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:28:28.670 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:28:28.670 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:28:28.670 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:28:28.670 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:28:28.670 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:28:28.670 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:28:30.045 Cleaning 00:28:30.045 Removing: /var/run/dpdk/spdk0/config 00:28:30.045 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:30.045 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:30.045 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:30.045 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:30.045 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:30.045 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:30.045 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:30.045 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:30.045 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:30.045 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:30.045 Removing: /var/run/dpdk/spdk1/config 00:28:30.045 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:30.045 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:30.045 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:30.045 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:30.045 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:30.045 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:30.045 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:28:30.045 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:30.045 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:30.045 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:30.045 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:30.045 Removing: /var/run/dpdk/spdk2/config 00:28:30.045 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:30.045 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:30.045 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:30.045 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:30.045 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:30.045 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:28:30.045 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:30.045 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:30.045 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:30.045 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:30.045 Removing: /var/run/dpdk/spdk3/config 00:28:30.045 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:30.045 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:30.045 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:30.045 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:30.045 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:28:30.045 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:28:30.045 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:28:30.045 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:28:30.045 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:30.045 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:30.045 Removing: /var/run/dpdk/spdk4/config 00:28:30.045 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:30.045 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:30.045 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:30.045 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:30.045 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:28:30.045 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:28:30.045 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:28:30.045 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:28:30.045 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:30.045 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:30.045 Removing: /dev/shm/bdev_svc_trace.1 00:28:30.045 Removing: /dev/shm/nvmf_trace.0 00:28:30.045 Removing: /dev/shm/spdk_tgt_trace.pid2516730 00:28:30.045 Removing: /var/run/dpdk/spdk0 00:28:30.045 Removing: /var/run/dpdk/spdk1 00:28:30.045 Removing: /var/run/dpdk/spdk2 00:28:30.045 Removing: /var/run/dpdk/spdk3 00:28:30.045 Removing: /var/run/dpdk/spdk4 00:28:30.045 Removing: /var/run/dpdk/spdk_pid2515061 00:28:30.045 Removing: /var/run/dpdk/spdk_pid2515802 00:28:30.045 Removing: /var/run/dpdk/spdk_pid2516730 00:28:30.045 Removing: /var/run/dpdk/spdk_pid2517169 00:28:30.045 Removing: /var/run/dpdk/spdk_pid2517862 00:28:30.045 Removing: /var/run/dpdk/spdk_pid2518002 00:28:30.045 Removing: /var/run/dpdk/spdk_pid2518722 00:28:30.045 Removing: /var/run/dpdk/spdk_pid2518858 00:28:30.045 Removing: /var/run/dpdk/spdk_pid2519101 00:28:30.045 Removing: /var/run/dpdk/spdk_pid2520296 00:28:30.045 Removing: 
/var/run/dpdk/spdk_pid2521208 00:28:30.045 Removing: /var/run/dpdk/spdk_pid2521521 00:28:30.045 Removing: /var/run/dpdk/spdk_pid2521720 00:28:30.045 Removing: /var/run/dpdk/spdk_pid2522021 00:28:30.045 Removing: /var/run/dpdk/spdk_pid2522244 00:28:30.045 Removing: /var/run/dpdk/spdk_pid2522401 00:28:30.045 Removing: /var/run/dpdk/spdk_pid2522585 00:28:30.045 Removing: /var/run/dpdk/spdk_pid2522859 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2523047 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2525402 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2525692 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2525852 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2525861 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2526292 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2526430 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2526859 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2526864 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2527133 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2527172 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2527453 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2527478 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2528033 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2528235 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2528426 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2528596 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2528741 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2528813 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2529236 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2529721 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2529907 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2530179 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2530339 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2530503 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2530771 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2530934 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2531091 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2531363 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2531523 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2531684 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2531953 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2532119 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2532274 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2532548 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2532714 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2532932 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2533142 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2533310 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2533492 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2533698 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2535763 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2561781 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2564409 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2572004 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2575308 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2577804 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2578206 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2582178 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2586150 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2586155 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2586807 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2587353 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2588010 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2588419 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2588539 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2588679 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2588810 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2588819 00:28:30.046 Removing: 
/var/run/dpdk/spdk_pid2589477 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2590018 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2590674 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2591076 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2591080 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2591344 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2592233 00:28:30.046 Removing: /var/run/dpdk/spdk_pid2592962 00:28:30.304 Removing: /var/run/dpdk/spdk_pid2598924 00:28:30.304 Removing: /var/run/dpdk/spdk_pid2599206 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2601838 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2605582 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2607719 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2614255 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2619449 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2620783 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2621446 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2631638 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2633970 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2659881 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2662790 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2663967 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2665282 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2665346 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2665444 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2665581 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2666013 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2667343 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2668080 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2668510 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2670161 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2670685 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2671132 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2673647 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2679546 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2682435 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2686211 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2687403 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2689128 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2691667 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2694037 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2698366 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2698377 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2701147 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2701280 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2701424 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2701774 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2701808 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2704570 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2704909 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2707564 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2709424 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2712957 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2716282 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2722637 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2727614 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2727647 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2739825 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2740360 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2740887 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2741302 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2741878 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2742388 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2742823 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2743234 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2745726 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2745874 00:28:30.305 Removing: 
/var/run/dpdk/spdk_pid2749666 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2749836 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2751438 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2756476 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2756486 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2759381 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2761397 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2762804 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2763604 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2764945 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2765826 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2771220 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2771613 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2772004 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2773440 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2773843 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2774245 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2776675 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2776811 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2778153 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2778633 00:28:30.305 Removing: /var/run/dpdk/spdk_pid2778649 00:28:30.305 Clean 00:28:30.305 01:05:05 -- common/autotest_common.sh@1451 -- # return 0 00:28:30.305 01:05:05 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:28:30.305 01:05:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:30.305 01:05:05 -- common/autotest_common.sh@10 -- # set +x 00:28:30.305 01:05:05 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:28:30.305 01:05:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:30.305 01:05:05 -- common/autotest_common.sh@10 -- # set +x 00:28:30.305 01:05:05 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:30.305 01:05:05 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:28:30.305 01:05:05 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:28:30.305 01:05:05 -- spdk/autotest.sh@391 -- # hash lcov 00:28:30.305 01:05:05 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:30.305 01:05:05 -- spdk/autotest.sh@393 -- # hostname 00:28:30.305 01:05:05 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:28:30.563 geninfo: WARNING: invalid characters removed from testname! 
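The coverage post-processing that follows mirrors the usual lcov aggregation flow: the test-time counters just captured into cov_test.info are appended to the pre-test baseline, and third-party / system paths are then stripped from the combined report. A minimal sketch of that flow is below; the directory, test label, and output names are illustrative placeholders, not the exact values autotest.sh uses:

  # capture counters gathered while the tests ran (placeholder source dir and test label)
  lcov --no-external -q -c -d "$REPO_DIR" -t my-test-host -o cov_test.info
  # merge the pre-test baseline with the test-time capture
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
  # drop coverage attributed to the bundled DPDK sources and system headers
  lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
  lcov -q -r cov_total.info '/usr/*'   -o cov_total.info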
00:29:02.643 01:05:32 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:02.643 01:05:36 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:05.180 01:05:39 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:08.541 01:05:42 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:11.067 01:05:45 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:14.343 01:05:48 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:16.872 01:05:51 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:16.872 01:05:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.872 01:05:51 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:16.872 01:05:51 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.872 01:05:51 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.872 01:05:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.872 01:05:51 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.872 01:05:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.872 01:05:51 -- paths/export.sh@5 -- $ export PATH 00:29:16.872 01:05:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.872 01:05:51 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:29:16.872 01:05:51 -- common/autobuild_common.sh@444 -- $ date +%s 00:29:16.872 01:05:51 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721084751.XXXXXX 00:29:16.872 01:05:51 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721084751.flngEx 00:29:16.872 01:05:51 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:29:16.872 01:05:51 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:29:16.872 01:05:51 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:29:16.872 01:05:51 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:29:16.872 01:05:51 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:29:16.872 01:05:51 -- common/autobuild_common.sh@460 -- $ get_config_params 00:29:16.872 01:05:51 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:29:16.872 01:05:51 -- common/autotest_common.sh@10 -- $ set +x 00:29:17.131 01:05:51 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:29:17.131 01:05:51 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:29:17.131 01:05:51 -- pm/common@17 -- $ local monitor 00:29:17.131 01:05:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:17.131 01:05:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:17.131 01:05:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:17.131 01:05:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:17.131 01:05:51 -- pm/common@21 -- $ date +%s 00:29:17.131 01:05:51 -- pm/common@25 -- $ sleep 1 00:29:17.131 
01:05:51 -- pm/common@21 -- $ date +%s 00:29:17.131 01:05:51 -- pm/common@21 -- $ date +%s 00:29:17.131 01:05:51 -- pm/common@21 -- $ date +%s 00:29:17.131 01:05:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721084751 00:29:17.131 01:05:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721084751 00:29:17.131 01:05:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721084751 00:29:17.131 01:05:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721084751 00:29:17.131 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721084751_collect-vmstat.pm.log 00:29:17.131 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721084751_collect-cpu-load.pm.log 00:29:17.131 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721084751_collect-cpu-temp.pm.log 00:29:17.131 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721084751_collect-bmc-pm.bmc.pm.log 00:29:18.069 01:05:52 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:29:18.069 01:05:52 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:29:18.069 01:05:52 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:18.069 01:05:52 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:18.069 01:05:52 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:18.069 01:05:52 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:18.069 01:05:52 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:18.069 01:05:52 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:18.069 01:05:52 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:18.069 01:05:52 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:18.069 01:05:52 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:18.069 01:05:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:18.069 01:05:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:18.069 01:05:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:18.069 01:05:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:29:18.069 01:05:52 -- pm/common@44 -- $ pid=2788387 00:29:18.069 01:05:52 -- pm/common@50 -- $ kill -TERM 2788387 00:29:18.069 01:05:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:18.069 01:05:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:29:18.069 01:05:52 -- pm/common@44 -- $ pid=2788389 00:29:18.070 01:05:52 -- pm/common@50 -- $ kill 
-TERM 2788389 00:29:18.070 01:05:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:18.070 01:05:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:29:18.070 01:05:52 -- pm/common@44 -- $ pid=2788391 00:29:18.070 01:05:52 -- pm/common@50 -- $ kill -TERM 2788391 00:29:18.070 01:05:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:18.070 01:05:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:29:18.070 01:05:52 -- pm/common@44 -- $ pid=2788419 00:29:18.070 01:05:52 -- pm/common@50 -- $ sudo -E kill -TERM 2788419 00:29:18.070 + [[ -n 2431382 ]] 00:29:18.070 + sudo kill 2431382 00:29:18.080 [Pipeline] } 00:29:18.098 [Pipeline] // stage 00:29:18.104 [Pipeline] } 00:29:18.123 [Pipeline] // timeout 00:29:18.127 [Pipeline] } 00:29:18.144 [Pipeline] // catchError 00:29:18.149 [Pipeline] } 00:29:18.167 [Pipeline] // wrap 00:29:18.173 [Pipeline] } 00:29:18.191 [Pipeline] // catchError 00:29:18.201 [Pipeline] stage 00:29:18.203 [Pipeline] { (Epilogue) 00:29:18.218 [Pipeline] catchError 00:29:18.220 [Pipeline] { 00:29:18.235 [Pipeline] echo 00:29:18.237 Cleanup processes 00:29:18.242 [Pipeline] sh 00:29:18.529 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:18.529 2788520 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:29:18.529 2788652 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:18.541 [Pipeline] sh 00:29:18.822 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:18.822 ++ grep -v 'sudo pgrep' 00:29:18.822 ++ awk '{print $1}' 00:29:18.822 + sudo kill -9 2788520 00:29:18.833 [Pipeline] sh 00:29:19.114 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:27.234 [Pipeline] sh 00:29:27.517 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:27.517 Artifacts sizes are good 00:29:27.534 [Pipeline] archiveArtifacts 00:29:27.542 Archiving artifacts 00:29:27.737 [Pipeline] sh 00:29:28.023 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:29:28.039 [Pipeline] cleanWs 00:29:28.058 [WS-CLEANUP] Deleting project workspace... 00:29:28.058 [WS-CLEANUP] Deferred wipeout is used... 00:29:28.066 [WS-CLEANUP] done 00:29:28.068 [Pipeline] } 00:29:28.092 [Pipeline] // catchError 00:29:28.107 [Pipeline] sh 00:29:28.392 + logger -p user.info -t JENKINS-CI 00:29:28.402 [Pipeline] } 00:29:28.420 [Pipeline] // stage 00:29:28.426 [Pipeline] } 00:29:28.441 [Pipeline] // node 00:29:28.445 [Pipeline] End of Pipeline 00:29:28.487 Finished: SUCCESS
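The Epilogue's "Cleanup processes" step reaps anything still running out of the workspace (here the leftover ipmitool SDR dump) before artifacts are compressed and archived, using the pgrep / grep / awk / kill pipeline visible above. A minimal stand-alone sketch of that idiom, with the workspace root as a placeholder:

  WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest   # placeholder workspace root
  # list PIDs whose command line references the workspace, excluding the pgrep itself
  pids=$(sudo pgrep -af "$WS/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # force-kill whatever is left; the trailing 'true' keeps the step green when nothing matches
  [ -n "$pids" ] && sudo kill -9 $pids || true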